/*
 * builtin-record.c
 *
 * Builtin record command: Record the profile of a workload
 * (or a CPU, or a PID) into the perf.data output file - for
 * later analysis via perf report.
 */
#include "builtin.h"

#include "perf.h"

#include "util/build-id.h"
#include "util/util.h"
#include <subcmd/parse-options.h>
#include "util/parse-events.h"
#include "util/config.h"

#include "util/callchain.h"
#include "util/cgroup.h"
#include "util/header.h"
#include "util/event.h"
#include "util/evlist.h"
#include "util/evsel.h"
#include "util/debug.h"
#include "util/session.h"
#include "util/tool.h"
#include "util/symbol.h"
#include "util/cpumap.h"
#include "util/thread_map.h"
#include "util/data.h"
#include "util/perf_regs.h"
#include "util/auxtrace.h"
#include "util/tsc.h"
#include "util/parse-branch-options.h"
#include "util/parse-regs-options.h"
#include "util/llvm-utils.h"
#include "util/bpf-loader.h"
#include "util/trigger.h"
#include "asm/bug.h"

#include <unistd.h>
#include <sched.h>
#include <sys/mman.h>
#include <asm/bug.h>

struct record {
	struct perf_tool	tool;
	struct record_opts	opts;
	u64			bytes_written;
	struct perf_data_file	file;
	struct auxtrace_record	*itr;
	struct perf_evlist	*evlist;
	struct perf_session	*session;
	const char		*progname;
	int			realtime_prio;
	bool			no_buildid;
	bool			no_buildid_set;
	bool			no_buildid_cache;
	bool			no_buildid_cache_set;
	bool			buildid_all;
	bool			timestamp_filename;
	bool			switch_output;
	unsigned long long	samples;
};

static int record__write(struct record *rec, void *bf, size_t size)
{
	if (perf_data_file__write(rec->session->file, bf, size) < 0) {
		pr_err("failed to write perf data, error: %m\n");
		return -1;
	}

	rec->bytes_written += size;
	return 0;
}

static int process_synthesized_event(struct perf_tool *tool,
				     union perf_event *event,
				     struct perf_sample *sample __maybe_unused,
				     struct machine *machine __maybe_unused)
{
	struct record *rec = container_of(tool, struct record, tool);
	return record__write(rec, event, event->header.size);
}

static int
backward_rb_find_range(void *buf, int mask, u64 head, u64 *start, u64 *end)
{
	struct perf_event_header *pheader;
	u64 evt_head = head;
	int size = mask + 1;

	pr_debug2("backward_rb_find_range: buf=%p, head=%"PRIx64"\n", buf, head);
	pheader = (struct perf_event_header *)(buf + (head & mask));
	*start = head;
	while (true) {
		if (evt_head - head >= (unsigned int)size) {
			pr_debug("Finished reading backward ring buffer: rewind\n");
			if (evt_head - head > (unsigned int)size)
				evt_head -= pheader->size;
			*end = evt_head;
			return 0;
		}

		pheader = (struct perf_event_header *)(buf + (evt_head & mask));

		if (pheader->size == 0) {
			pr_debug("Finished reading backward ring buffer: get start\n");
			*end = evt_head;
			return 0;
		}

		evt_head += pheader->size;
		pr_debug3("move evt_head: %"PRIx64"\n", evt_head);
	}
	WARN_ONCE(1, "Shouldn't get here\n");
	return -1;
}

static int
rb_find_range(struct perf_evlist *evlist,
	      void *data, int mask, u64 head, u64 old,
	      u64 *start, u64 *end)
{
	if (!evlist->backward) {
		*start = old;
		*end = head;
		return 0;
	}

	return backward_rb_find_range(data, mask, head, start, end);
}

static int record__mmap_read(struct record *rec, int idx)
{
	struct perf_mmap *md = &rec->evlist->mmap[idx];
	u64 head = perf_mmap__read_head(md);
	u64 old = md->prev;
	u64 end = head, start = old;
	unsigned char *data = md->base + page_size;
	unsigned long size;
	void *buf;
	int rc = 0;

	if (rb_find_range(rec->evlist, data, md->mask, head,
			  old, &start, &end))
		return -1;

	if (start == end)
		return 0;

	rec->samples++;

	size = end - start;
	if (size > (unsigned long)(md->mask) + 1) {
		WARN_ONCE(1, "failed to keep up with mmap data. (warn only once)\n");

		md->prev = head;
		perf_evlist__mmap_consume(rec->evlist, idx);
		return 0;
	}

	if ((start & md->mask) + size != (end & md->mask)) {
		buf = &data[start & md->mask];
		size = md->mask + 1 - (start & md->mask);
		start += size;

		if (record__write(rec, buf, size) < 0) {
			rc = -1;
			goto out;
		}
	}

	buf = &data[start & md->mask];
	size = end - start;
	start += size;

	if (record__write(rec, buf, size) < 0) {
		rc = -1;
		goto out;
	}

	md->prev = head;
	perf_evlist__mmap_consume(rec->evlist, idx);
out:
	return rc;
}

static volatile int done;
static volatile int signr = -1;
static volatile int child_finished;

static volatile int auxtrace_record__snapshot_started;
static DEFINE_TRIGGER(auxtrace_snapshot_trigger);
static DEFINE_TRIGGER(switch_output_trigger);

static void sig_handler(int sig)
{
	if (sig == SIGCHLD)
		child_finished = 1;
	else
		signr = sig;

	done = 1;
}

static void record__sig_exit(void)
{
	if (signr == -1)
		return;

	signal(signr, SIG_DFL);
	raise(signr);
}

#ifdef HAVE_AUXTRACE_SUPPORT

static int record__process_auxtrace(struct perf_tool *tool,
				    union perf_event *event, void *data1,
				    size_t len1, void *data2, size_t len2)
{
	struct record *rec = container_of(tool, struct record, tool);
	struct perf_data_file *file = &rec->file;
	size_t padding;
	u8 pad[8] = {0};

	if (!perf_data_file__is_pipe(file)) {
		off_t file_offset;
		int fd = perf_data_file__fd(file);
		int err;

		file_offset = lseek(fd, 0, SEEK_CUR);
		if (file_offset == -1)
			return -1;
		err = auxtrace_index__auxtrace_event(&rec->session->auxtrace_index,
						     event, file_offset);
		if (err)
			return err;
	}

	/* event.auxtrace.size includes padding, see __auxtrace_mmap__read() */
	padding = (len1 + len2) & 7;
	if (padding)
		padding = 8 - padding;

	record__write(rec, event, event->header.size);
	record__write(rec, data1, len1);
	if (len2)
		record__write(rec, data2, len2);
	record__write(rec, &pad, padding);

	return 0;
}

static int record__auxtrace_mmap_read(struct record *rec,
				      struct auxtrace_mmap *mm)
{
	int ret;

	ret = auxtrace_mmap__read(mm, rec->itr, &rec->tool,
				  record__process_auxtrace);
	if (ret < 0)
		return ret;

	if (ret)
		rec->samples++;

	return 0;
}

static int record__auxtrace_mmap_read_snapshot(struct record *rec,
					       struct auxtrace_mmap *mm)
{
	int ret;

	ret = auxtrace_mmap__read_snapshot(mm, rec->itr, &rec->tool,
					   record__process_auxtrace,
					   rec->opts.auxtrace_snapshot_size);
	if (ret < 0)
		return ret;

	if (ret)
		rec->samples++;

	return 0;
}

static int record__auxtrace_read_snapshot_all(struct record *rec)
{
	int i;
	int rc = 0;

	for (i = 0; i < rec->evlist->nr_mmaps; i++) {
		struct auxtrace_mmap *mm =
				&rec->evlist->mmap[i].auxtrace_mmap;

		if (!mm->base)
			continue;

		if (record__auxtrace_mmap_read_snapshot(rec, mm) != 0) {
			rc = -1;
			goto out;
		}
	}
out:
	return rc;
}

static void record__read_auxtrace_snapshot(struct record *rec)
{
	pr_debug("Recording AUX area tracing snapshot\n");
	if (record__auxtrace_read_snapshot_all(rec) < 0) {
		trigger_error(&auxtrace_snapshot_trigger);
	} else {
		if (auxtrace_record__snapshot_finish(rec->itr))
			trigger_error(&auxtrace_snapshot_trigger);
		else
			trigger_ready(&auxtrace_snapshot_trigger);
	}
}

#else

static inline
int record__auxtrace_mmap_read(struct record *rec __maybe_unused,
			       struct auxtrace_mmap *mm __maybe_unused)
{
	return 0;
}

static inline
void record__read_auxtrace_snapshot(struct record *rec __maybe_unused)
{
}

static inline
int auxtrace_record__snapshot_start(struct auxtrace_record *itr __maybe_unused)
{
	return 0;
}

#endif

static int record__open(struct record *rec)
{
	char msg[512];
	struct perf_evsel *pos;
	struct perf_evlist *evlist = rec->evlist;
	struct perf_session *session = rec->session;
	struct record_opts *opts = &rec->opts;
	int rc = 0;

	perf_evlist__config(evlist, opts, &callchain_param);

	evlist__for_each_entry(evlist, pos) {
try_again:
		if (perf_evsel__open(pos, pos->cpus, pos->threads) < 0) {
			if (perf_evsel__fallback(pos, errno, msg, sizeof(msg))) {
				if (verbose)
					ui__warning("%s\n", msg);
				goto try_again;
			}

			rc = -errno;
			perf_evsel__open_strerror(pos, &opts->target,
						  errno, msg, sizeof(msg));
			ui__error("%s\n", msg);
			goto out;
		}
	}

	if (perf_evlist__apply_filters(evlist, &pos)) {
		error("failed to set filter \"%s\" on event %s with %d (%s)\n",
			pos->filter, perf_evsel__name(pos), errno,
			strerror_r(errno, msg, sizeof(msg)));
		rc = -1;
		goto out;
	}

	if (perf_evlist__mmap_ex(evlist, opts->mmap_pages, false,
				 opts->auxtrace_mmap_pages,
				 opts->auxtrace_snapshot_mode) < 0) {
		if (errno == EPERM) {
			pr_err("Permission error mapping pages.\n"
			       "Consider increasing "
			       "/proc/sys/kernel/perf_event_mlock_kb,\n"
			       "or try again with a smaller value of -m/--mmap_pages.\n"
			       "(current value: %u,%u)\n",
			       opts->mmap_pages, opts->auxtrace_mmap_pages);
			rc = -errno;
		} else {
			pr_err("failed to mmap with %d (%s)\n", errno,
				strerror_r(errno, msg, sizeof(msg)));
			if (errno)
				rc = -errno;
			else
				rc = -EINVAL;
		}
		goto out;
	}

	session->evlist = evlist;
	perf_session__set_id_hdr_size(session);
out:
	return rc;
}

static int process_sample_event(struct perf_tool *tool,
				union perf_event *event,
				struct perf_sample *sample,
				struct perf_evsel *evsel,
				struct machine *machine)
{
	struct record *rec = container_of(tool, struct record, tool);

	rec->samples++;

	return build_id__mark_dso_hit(tool, event, sample, evsel, machine);
}

static int process_buildids(struct record *rec)
{
	struct perf_data_file *file = &rec->file;
	struct perf_session *session = rec->session;

	if (file->size == 0)
		return 0;

	/*
	 * During this process, it'll load kernel map and replace the
	 * dso->long_name to a real pathname it found.  In this case
	 * we prefer the vmlinux path like
	 *   /lib/modules/3.16.4/build/vmlinux
	 *
	 * rather than build-id path (in debug directory).
	 *   $HOME/.debug/.build-id/f0/6e17aa50adf4d00b88925e03775de107611551
	 */
	symbol_conf.ignore_vmlinux_buildid = true;

	/*
	 * If --buildid-all is given, it marks all DSO regardless of hits,
	 * so no need to process samples.
	 */
	if (rec->buildid_all)
		rec->tool.sample = NULL;

	return perf_session__process_events(session);
}
|
|
|
|
|
static void perf_event__synthesize_guest_os(struct machine *machine, void *data)
{
	int err;
	struct perf_tool *tool = data;
	/*
	 * As for guest kernel when processing subcommand record&report,
	 * we arrange module mmap prior to guest kernel mmap and trigger
	 * a preload dso because default guest module symbols are loaded
	 * from guest kallsyms instead of /lib/modules/XXX/XXX.  This
	 * method is used to avoid symbol missing when the first addr is
	 * in module instead of in guest kernel.
	 */
	err = perf_event__synthesize_modules(tool, process_synthesized_event,
					     machine);
	if (err < 0)
		pr_err("Couldn't record guest kernel [%d]'s reference"
		       " relocation symbol.\n", machine->pid);

	/*
	 * We use _stext for guest kernel because guest kernel's /proc/kallsyms
	 * have no _text sometimes.
	 */
	err = perf_event__synthesize_kernel_mmap(tool, process_synthesized_event,
						 machine);
	if (err < 0)
		pr_err("Couldn't record guest kernel [%d]'s reference"
		       " relocation symbol.\n", machine->pid);
}

static struct perf_event_header finished_round_event = {
	.size = sizeof(struct perf_event_header),
	.type = PERF_RECORD_FINISHED_ROUND,
};

static int record__mmap_read_all(struct record *rec)
{
	u64 bytes_written = rec->bytes_written;
	int i;
	int rc = 0;

	for (i = 0; i < rec->evlist->nr_mmaps; i++) {
		struct auxtrace_mmap *mm = &rec->evlist->mmap[i].auxtrace_mmap;

		if (rec->evlist->mmap[i].base) {
			if (record__mmap_read(rec, i) != 0) {
				rc = -1;
				goto out;
			}
		}

		if (mm->base && !rec->opts.auxtrace_snapshot_mode &&
		    record__auxtrace_mmap_read(rec, mm) != 0) {
			rc = -1;
			goto out;
		}
	}

	/*
	 * Mark the round finished in case we wrote
	 * at least one event.
	 */
	if (bytes_written != rec->bytes_written)
		rc = record__write(rec, &finished_round_event, sizeof(finished_round_event));

out:
	return rc;
}

static void record__init_features(struct record *rec)
{
	struct perf_session *session = rec->session;
	int feat;

	for (feat = HEADER_FIRST_FEATURE; feat < HEADER_LAST_FEATURE; feat++)
		perf_header__set_feat(&session->header, feat);

	if (rec->no_buildid)
		perf_header__clear_feat(&session->header, HEADER_BUILD_ID);

	if (!have_tracepoints(&rec->evlist->entries))
		perf_header__clear_feat(&session->header, HEADER_TRACING_DATA);

	if (!rec->opts.branch_stack)
		perf_header__clear_feat(&session->header, HEADER_BRANCH_STACK);

	if (!rec->opts.full_auxtrace)
		perf_header__clear_feat(&session->header, HEADER_AUXTRACE);

	perf_header__clear_feat(&session->header, HEADER_STAT);
}

static void
record__finish_output(struct record *rec)
{
	struct perf_data_file *file = &rec->file;
	int fd = perf_data_file__fd(file);

	if (file->is_pipe)
		return;

	rec->session->header.data_size += rec->bytes_written;
	file->size = lseek(perf_data_file__fd(file), 0, SEEK_CUR);

	if (!rec->no_buildid) {
		process_buildids(rec);

		if (rec->buildid_all)
			dsos__hit_all(rec->session);
	}
	perf_session__write_header(rec->session, rec->evlist, fd, true);

	return;
}

static int record__synthesize_workload(struct record *rec)
{
	struct {
		struct thread_map map;
		struct thread_map_data map_data;
	} thread_map;

	thread_map.map.nr = 1;
	thread_map.map.map[0].pid  = rec->evlist->workload.pid;
	thread_map.map.map[0].comm = NULL;
	return perf_event__synthesize_thread_map(&rec->tool, &thread_map.map,
						 process_synthesized_event,
						 &rec->session->machines.host,
						 rec->opts.sample_address,
						 rec->opts.proc_map_timeout);
}

static int record__synthesize(struct record *rec);

static int
record__switch_output(struct record *rec, bool at_exit)
{
	struct perf_data_file *file = &rec->file;
	int fd, err;

	/* Same Size: "2015122520103046"*/
	char timestamp[] = "InvalidTimestamp";

	rec->samples = 0;
	record__finish_output(rec);
	err = fetch_current_timestamp(timestamp, sizeof(timestamp));
	if (err) {
		pr_err("Failed to get current timestamp\n");
		return -EINVAL;
	}

	fd = perf_data_file__switch(file, timestamp,
				    rec->session->header.data_offset,
				    at_exit);
	if (fd >= 0 && !at_exit) {
		rec->bytes_written = 0;
		rec->session->header.data_size = 0;
	}

	if (!quiet)
		fprintf(stderr, "[ perf record: Dump %s.%s ]\n",
			file->path, timestamp);

	/* Output tracking events */
	if (!at_exit) {
		record__synthesize(rec);

		/*
		 * In 'perf record --switch-output' without -a,
		 * record__synthesize() in record__switch_output() won't
		 * generate tracking events because there's no thread_map
		 * in evlist, so a newly created perf.data wouldn't
		 * contain map and comm information.
		 * Create a fake thread_map and directly call
		 * perf_event__synthesize_thread_map() for those events.
		 */
		if (target__none(&rec->opts.target))
			record__synthesize_workload(rec);
	}
	return fd;
}

static volatile int workload_exec_errno;

/*
 * perf_evlist__prepare_workload will send a SIGUSR1
 * if the fork fails, since we asked by setting its
 * want_signal to true.
 */
static void workload_exec_failed_signal(int signo __maybe_unused,
					siginfo_t *info,
					void *ucontext __maybe_unused)
{
	workload_exec_errno = info->si_value.sival_int;
	done = 1;
	child_finished = 1;
}

static void snapshot_sig_handler(int sig);

int __weak
perf_event__synth_time_conv(const struct perf_event_mmap_page *pc __maybe_unused,
			    struct perf_tool *tool __maybe_unused,
			    perf_event__handler_t process __maybe_unused,
			    struct machine *machine __maybe_unused)
{
	return 0;
}

static const struct perf_event_mmap_page *record__pick_pc(struct record *rec)
{
	if (rec->evlist && rec->evlist->mmap && rec->evlist->mmap[0].base)
		return rec->evlist->mmap[0].base;
	return NULL;
}

static int record__synthesize(struct record *rec)
{
	struct perf_session *session = rec->session;
	struct machine *machine = &session->machines.host;
	struct perf_data_file *file = &rec->file;
	struct record_opts *opts = &rec->opts;
	struct perf_tool *tool = &rec->tool;
	int fd = perf_data_file__fd(file);
	int err = 0;

	if (file->is_pipe) {
		err = perf_event__synthesize_attrs(tool, session,
						   process_synthesized_event);
		if (err < 0) {
			pr_err("Couldn't synthesize attrs.\n");
			goto out;
		}

		if (have_tracepoints(&rec->evlist->entries)) {
			/*
			 * FIXME err <= 0 here actually means that
			 * there were no tracepoints so it's not really
			 * an error, just that we don't need to
			 * synthesize anything.  We really have to
			 * return this more properly and also
			 * propagate errors that now are calling die()
			 */
			err = perf_event__synthesize_tracing_data(tool, fd, rec->evlist,
								  process_synthesized_event);
			if (err <= 0) {
				pr_err("Couldn't record tracing data.\n");
				goto out;
			}
			rec->bytes_written += err;
		}
	}

	err = perf_event__synth_time_conv(record__pick_pc(rec), tool,
					  process_synthesized_event, machine);
	if (err)
		goto out;

	if (rec->opts.full_auxtrace) {
		err = perf_event__synthesize_auxtrace_info(rec->itr, tool,
					session, process_synthesized_event);
		if (err)
			goto out;
	}

	err = perf_event__synthesize_kernel_mmap(tool, process_synthesized_event,
						 machine);
	WARN_ONCE(err < 0, "Couldn't record kernel reference relocation symbol\n"
			   "Symbol resolution may be skewed if relocation was used (e.g. kexec).\n"
			   "Check /proc/kallsyms permission or run as root.\n");

	err = perf_event__synthesize_modules(tool, process_synthesized_event,
					     machine);
	WARN_ONCE(err < 0, "Couldn't record kernel module information.\n"
			   "Symbol resolution may be skewed if relocation was used (e.g. kexec).\n"
			   "Check /proc/modules permission or run as root.\n");

	if (perf_guest) {
		machines__process_guests(&session->machines,
					 perf_event__synthesize_guest_os, tool);
	}

	err = __machine__synthesize_threads(machine, tool, &opts->target, rec->evlist->threads,
					    process_synthesized_event, opts->sample_address,
					    opts->proc_map_timeout);
out:
	return err;
}

static int __cmd_record(struct record *rec, int argc, const char **argv)
{
	int err;
	int status = 0;
	unsigned long waking = 0;
	const bool forks = argc > 0;
	struct machine *machine;
	struct perf_tool *tool = &rec->tool;
	struct record_opts *opts = &rec->opts;
	struct perf_data_file *file = &rec->file;
	struct perf_session *session;
	bool disabled = false, draining = false;
	int fd;

	rec->progname = argv[0];

	atexit(record__sig_exit);
	signal(SIGCHLD, sig_handler);
	signal(SIGINT, sig_handler);
	signal(SIGTERM, sig_handler);

	if (rec->opts.auxtrace_snapshot_mode || rec->switch_output) {
		signal(SIGUSR2, snapshot_sig_handler);
		if (rec->opts.auxtrace_snapshot_mode)
			trigger_on(&auxtrace_snapshot_trigger);
		if (rec->switch_output)
			trigger_on(&switch_output_trigger);
	} else {
		signal(SIGUSR2, SIG_IGN);
	}

	session = perf_session__new(file, false, tool);
	if (session == NULL) {
		pr_err("Perf session creation failed.\n");
		return -1;
	}

	fd = perf_data_file__fd(file);
	rec->session = session;

	record__init_features(rec);

	if (forks) {
		err = perf_evlist__prepare_workload(rec->evlist, &opts->target,
						    argv, file->is_pipe,
						    workload_exec_failed_signal);
		if (err < 0) {
			pr_err("Couldn't run the workload!\n");
			status = err;
			goto out_delete_session;
		}
	}

	if (record__open(rec) != 0) {
		err = -1;
		goto out_child;
	}

	err = bpf__apply_obj_config();
	if (err) {
		char errbuf[BUFSIZ];

		bpf__strerror_apply_obj_config(err, errbuf, sizeof(errbuf));
		pr_err("ERROR: Apply config to BPF failed: %s\n",
			 errbuf);
		goto out_child;
	}

	/*
	 * Normally perf_session__new would do this, but it doesn't have the
	 * evlist.
	 */
	if (rec->tool.ordered_events && !perf_evlist__sample_id_all(rec->evlist)) {
		pr_warning("WARNING: No sample_id_all support, falling back to unordered processing\n");
		rec->tool.ordered_events = false;
	}

	if (!rec->evlist->nr_groups)
		perf_header__clear_feat(&session->header, HEADER_GROUP_DESC);

	if (file->is_pipe) {
		err = perf_header__write_pipe(fd);
		if (err < 0)
			goto out_child;
	} else {
		err = perf_session__write_header(session, rec->evlist, fd, false);
		if (err < 0)
			goto out_child;
	}

	if (!rec->no_buildid
	    && !perf_header__has_feat(&session->header, HEADER_BUILD_ID)) {
		pr_err("Couldn't generate buildids. "
		       "Use --no-buildid to profile anyway.\n");
		err = -1;
		goto out_child;
	}

	machine = &session->machines.host;

	err = record__synthesize(rec);
	if (err < 0)
		goto out_child;

	if (rec->realtime_prio) {
		struct sched_param param;

		param.sched_priority = rec->realtime_prio;
		if (sched_setscheduler(0, SCHED_FIFO, &param)) {
			pr_err("Could not set realtime priority.\n");
			err = -1;
			goto out_child;
		}
	}

	/*
	 * When perf is starting the traced process, all the events
	 * (apart from group members) have enable_on_exec=1 set,
	 * so don't spoil it by prematurely enabling them.
	 */
	if (!target__none(&opts->target) && !opts->initial_delay)
		perf_evlist__enable(rec->evlist);

	/*
	 * Let the child rip
	 */
	if (forks) {
		union perf_event *event;

		event = malloc(sizeof(event->comm) + machine->id_hdr_size);
		if (event == NULL) {
			err = -ENOMEM;
			goto out_child;
		}

		/*
		 * Some H/W events are generated before COMM event
		 * which is emitted during exec(), so perf script
		 * cannot see a correct process name for those events.
		 * Synthesize COMM event to prevent it.
		 */
		perf_event__synthesize_comm(tool, event,
					    rec->evlist->workload.pid,
					    process_synthesized_event,
					    machine);
		free(event);

		perf_evlist__start_workload(rec->evlist);
	}

	if (opts->initial_delay) {
		usleep(opts->initial_delay * 1000);
		perf_evlist__enable(rec->evlist);
	}

	trigger_ready(&auxtrace_snapshot_trigger);
	trigger_ready(&switch_output_trigger);
	for (;;) {
		unsigned long long hits = rec->samples;

		if (record__mmap_read_all(rec) < 0) {
			trigger_error(&auxtrace_snapshot_trigger);
			trigger_error(&switch_output_trigger);
			err = -1;
			goto out_child;
		}

		if (auxtrace_record__snapshot_started) {
			auxtrace_record__snapshot_started = 0;
			if (!trigger_is_error(&auxtrace_snapshot_trigger))
				record__read_auxtrace_snapshot(rec);
			if (trigger_is_error(&auxtrace_snapshot_trigger)) {
				pr_err("AUX area tracing snapshot failed\n");
				err = -1;
				goto out_child;
			}
		}

		if (trigger_is_hit(&switch_output_trigger)) {
			trigger_ready(&switch_output_trigger);

			if (!quiet)
				fprintf(stderr, "[ perf record: dump data: Woken up %ld times ]\n",
					waking);
			waking = 0;
			fd = record__switch_output(rec, false);
			if (fd < 0) {
				pr_err("Failed to switch to new file\n");
				trigger_error(&switch_output_trigger);
				err = fd;
				goto out_child;
			}
		}

		if (hits == rec->samples) {
			if (done || draining)
				break;
			err = perf_evlist__poll(rec->evlist, -1);
			/*
			 * Propagate error, only if there's any. Ignore positive
			 * number of returned events and interrupt error.
			 */
			if (err > 0 || (err < 0 && errno == EINTR))
				err = 0;
			waking++;

			if (perf_evlist__filter_pollfd(rec->evlist, POLLERR | POLLHUP) == 0)
				draining = true;
		}

		/*
		 * When perf is starting the traced process, at the end events
		 * die with the process and we wait for that. Thus no need to
		 * disable events in this case.
		 */
		if (done && !disabled && !target__none(&opts->target)) {
			trigger_off(&auxtrace_snapshot_trigger);
			perf_evlist__disable(rec->evlist);
			disabled = true;
		}
	}
	trigger_off(&auxtrace_snapshot_trigger);
	trigger_off(&switch_output_trigger);

	if (forks && workload_exec_errno) {
		char msg[STRERR_BUFSIZE];
		const char *emsg = strerror_r(workload_exec_errno, msg, sizeof(msg));
		pr_err("Workload failed: %s\n", emsg);
		err = -1;
		goto out_child;
	}

	if (!quiet)
		fprintf(stderr, "[ perf record: Woken up %ld times to write data ]\n", waking);

out_child:
	if (forks) {
		int exit_status;

		if (!child_finished)
			kill(rec->evlist->workload.pid, SIGTERM);

		wait(&exit_status);

		if (err < 0)
			status = err;
		else if (WIFEXITED(exit_status))
			status = WEXITSTATUS(exit_status);
		else if (WIFSIGNALED(exit_status))
			signr = WTERMSIG(exit_status);
	} else
		status = err;

	/* this will be recalculated during process_buildids() */
	rec->samples = 0;

|
|
|
if (!err) {
|
|
|
|
if (!rec->timestamp_filename) {
|
|
|
|
record__finish_output(rec);
|
|
|
|
} else {
|
|
|
|
fd = record__switch_output(rec, true);
|
|
|
|
if (fd < 0) {
|
|
|
|
status = fd;
|
|
|
|
goto out_delete_session;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
2010-07-30 01:08:55 +08:00
|
|
|
|
2015-01-29 16:06:44 +08:00
|
|
|
if (!err && !quiet) {
|
|
|
|
char samples[128];
|
2016-04-13 16:21:07 +08:00
|
|
|
const char *postfix = rec->timestamp_filename ?
|
|
|
|
".<timestamp>" : "";
|
2015-01-29 16:06:44 +08:00
|
|
|
|
2015-04-09 23:53:45 +08:00
|
|
|
if (rec->samples && !rec->opts.full_auxtrace)
|
2015-01-29 16:06:44 +08:00
|
|
|
scnprintf(samples, sizeof(samples),
|
|
|
|
" (%" PRIu64 " samples)", rec->samples);
|
|
|
|
else
|
|
|
|
samples[0] = '\0';
|
|
|
|
|
2016-04-13 16:21:07 +08:00
|
|
|
fprintf(stderr, "[ perf record: Captured and wrote %.3f MB %s%s%s ]\n",
|
2015-01-29 16:06:44 +08:00
|
|
|
perf_data_file__size(file) / 1024.0 / 1024.0,
|
2016-04-13 16:21:07 +08:00
|
|
|
file->path, postfix, samples);
|
2015-01-29 16:06:44 +08:00
|
|
|
}
|
|
|
|
|
2010-07-30 01:08:55 +08:00
|
|
|
out_delete_session:
|
|
|
|
perf_session__delete(session);
|
2014-05-12 08:47:24 +08:00
|
|
|
return status;
|
2009-04-08 21:01:31 +08:00
|
|
|
}
|
2009-05-26 15:17:18 +08:00
|
|
|
|
static void callchain_debug(struct callchain_param *callchain)
{
	static const char *str[CALLCHAIN_MAX] = { "NONE", "FP", "DWARF", "LBR" };

	pr_debug("callchain: type %s\n", str[callchain->record_mode]);

	if (callchain->record_mode == CALLCHAIN_DWARF)
		pr_debug("callchain: stack dump size %d\n",
			 callchain->dump_size);
}

int record_opts__parse_callchain(struct record_opts *record,
				 struct callchain_param *callchain,
				 const char *arg, bool unset)
{
	int ret;
	callchain->enabled = !unset;

	/* --no-call-graph */
	if (unset) {
		callchain->record_mode = CALLCHAIN_NONE;
		pr_debug("callchain: disabled\n");
		return 0;
	}

	ret = parse_callchain_record_opt(arg, callchain);
	if (!ret) {
		/* Enable data address sampling for DWARF unwind. */
		if (callchain->record_mode == CALLCHAIN_DWARF)
			record->sample_address = true;
		callchain_debug(callchain);
	}

	return ret;
}

int record_parse_callchain_opt(const struct option *opt,
			       const char *arg,
			       int unset)
{
	return record_opts__parse_callchain(opt->value, &callchain_param, arg, unset);
}

int record_callchain_opt(const struct option *opt,
			 const char *arg __maybe_unused,
			 int unset __maybe_unused)
{
	struct callchain_param *callchain = opt->value;

	callchain->enabled = true;

	if (callchain->record_mode == CALLCHAIN_NONE)
		callchain->record_mode = CALLCHAIN_FP;

	callchain_debug(callchain);
	return 0;
}

static int perf_record_config(const char *var, const char *value, void *cb)
{
	struct record *rec = cb;

	if (!strcmp(var, "record.build-id")) {
		if (!strcmp(value, "cache"))
			rec->no_buildid_cache = false;
		else if (!strcmp(value, "no-cache"))
			rec->no_buildid_cache = true;
		else if (!strcmp(value, "skip"))
			rec->no_buildid = true;
		else
			return -1;
		return 0;
	}
	if (!strcmp(var, "record.call-graph"))
		var = "call-graph.record-mode"; /* fall-through */

	return perf_default_config(var, value, cb);
}

struct clockid_map {
	const char *name;
	int clockid;
};

#define CLOCKID_MAP(n, c)	\
	{ .name = n, .clockid = (c), }

#define CLOCKID_END	{ .name = NULL, }


/*
 * Add the missing ones, we need to build on many distros...
 */
#ifndef CLOCK_MONOTONIC_RAW
#define CLOCK_MONOTONIC_RAW 4
#endif
#ifndef CLOCK_BOOTTIME
#define CLOCK_BOOTTIME 7
#endif
#ifndef CLOCK_TAI
#define CLOCK_TAI 11
#endif

static const struct clockid_map clockids[] = {
	/* available for all events, NMI safe */
	CLOCKID_MAP("monotonic", CLOCK_MONOTONIC),
	CLOCKID_MAP("monotonic_raw", CLOCK_MONOTONIC_RAW),

	/* available for some events */
	CLOCKID_MAP("realtime", CLOCK_REALTIME),
	CLOCKID_MAP("boottime", CLOCK_BOOTTIME),
	CLOCKID_MAP("tai", CLOCK_TAI),

	/* available for the lazy */
	CLOCKID_MAP("mono", CLOCK_MONOTONIC),
	CLOCKID_MAP("raw", CLOCK_MONOTONIC_RAW),
	CLOCKID_MAP("real", CLOCK_REALTIME),
	CLOCKID_MAP("boot", CLOCK_BOOTTIME),

	CLOCKID_END,
};

static int parse_clockid(const struct option *opt, const char *str, int unset)
{
	struct record_opts *opts = (struct record_opts *)opt->value;
	const struct clockid_map *cm;
	const char *ostr = str;

	if (unset) {
		opts->use_clockid = 0;
		return 0;
	}

	/* no arg passed */
	if (!str)
		return 0;

	/* no setting it twice */
	if (opts->use_clockid)
		return -1;

	opts->use_clockid = true;

	/* if it's a number, we're done */
	if (sscanf(str, "%d", &opts->clockid) == 1)
		return 0;

	/* allow a "CLOCK_" prefix to the name */
	if (!strncasecmp(str, "CLOCK_", 6))
		str += 6;

	for (cm = clockids; cm->name; cm++) {
		if (!strcasecmp(str, cm->name)) {
			opts->clockid = cm->clockid;
			return 0;
		}
	}

	opts->use_clockid = false;
	ui__warning("unknown clockid %s, check man page\n", ostr);
	return -1;
}

static int record__parse_mmap_pages(const struct option *opt,
				    const char *str,
				    int unset __maybe_unused)
{
	struct record_opts *opts = opt->value;
	char *s, *p;
	unsigned int mmap_pages;
	int ret;

	if (!str)
		return -EINVAL;

	s = strdup(str);
	if (!s)
		return -ENOMEM;

	p = strchr(s, ',');
	if (p)
		*p = '\0';

	if (*s) {
		ret = __perf_evlist__parse_mmap_pages(&mmap_pages, s);
		if (ret)
			goto out_free;
		opts->mmap_pages = mmap_pages;
	}

	if (!p) {
		ret = 0;
		goto out_free;
	}

	ret = __perf_evlist__parse_mmap_pages(&mmap_pages, p + 1);
	if (ret)
		goto out_free;

	opts->auxtrace_mmap_pages = mmap_pages;

out_free:
	free(s);
	return ret;
}

static const char * const __record_usage[] = {
	"perf record [<options>] [<command>]",
	"perf record [<options>] -- <command> [<options>]",
	NULL
};
const char * const *record_usage = __record_usage;

/*
 * XXX Ideally this would be local to cmd_record() and passed to a record__new,
 * because we need to have access to it in record__exit, which is called
 * after cmd_record() exits. But since record_options needs to be accessible to
 * builtin-script, leave it here.
 *
 * At least we don't touch it in all the other functions here directly.
 *
 * Just say no to tons of global variables, sigh.
 */
static struct record record = {
	.opts = {
		.sample_time	     = true,
		.mmap_pages	     = UINT_MAX,
		.user_freq	     = UINT_MAX,
		.user_interval	     = ULLONG_MAX,
		.freq		     = 4000,
		.target		     = {
			.uses_mmap   = true,
			.default_per_cpu = true,
		},
		.proc_map_timeout    = 500,
	},
	.tool = {
		.sample		= process_sample_event,
		.fork		= perf_event__process_fork,
		.exit		= perf_event__process_exit,
		.comm		= perf_event__process_comm,
		.mmap		= perf_event__process_mmap,
		.mmap2		= perf_event__process_mmap2,
		.ordered_events	= true,
	},
};

const char record_callchain_help[] = CALLCHAIN_RECORD_HELP
	"\n\t\t\t\tDefault: fp";

static bool dry_run;

/*
 * XXX Will stay a global variable till we fix builtin-script.c to stop messing
 * with it and switch to use the library functions in perf_evlist that came
 * from builtin-record.c, i.e. use record_opts,
 * perf_evlist__prepare_workload, etc instead of fork+exec'in 'perf record',
 * using pipes, etc.
 */
struct option __record_options[] = {
	OPT_CALLBACK('e', "event", &record.evlist, "event",
		     "event selector. use 'perf list' to list available events",
		     parse_events_option),
	OPT_CALLBACK(0, "filter", &record.evlist, "filter",
		     "event filter", parse_filter),
	OPT_CALLBACK_NOOPT(0, "exclude-perf", &record.evlist,
			   NULL, "don't record events from perf itself",
			   exclude_perf),
	OPT_STRING('p', "pid", &record.opts.target.pid, "pid",
		    "record events on existing process id"),
	OPT_STRING('t', "tid", &record.opts.target.tid, "tid",
		    "record events on existing thread id"),
	OPT_INTEGER('r', "realtime", &record.realtime_prio,
		    "collect data with this RT SCHED_FIFO priority"),
	OPT_BOOLEAN(0, "no-buffering", &record.opts.no_buffering,
		    "collect data without buffering"),
	OPT_BOOLEAN('R', "raw-samples", &record.opts.raw_samples,
		    "collect raw sample records from all opened counters"),
	OPT_BOOLEAN('a', "all-cpus", &record.opts.target.system_wide,
		    "system-wide collection from all CPUs"),
	OPT_STRING('C', "cpu", &record.opts.target.cpu_list, "cpu",
		    "list of cpus to monitor"),
	OPT_U64('c', "count", &record.opts.user_interval, "event period to sample"),
	OPT_STRING('o', "output", &record.file.path, "file",
		    "output file name"),
	OPT_BOOLEAN_SET('i', "no-inherit", &record.opts.no_inherit,
			&record.opts.no_inherit_set,
			"child tasks do not inherit counters"),
	OPT_UINTEGER('F', "freq", &record.opts.user_freq, "profile at this frequency"),
	OPT_CALLBACK('m', "mmap-pages", &record.opts, "pages[,pages]",
		     "number of mmap data pages and AUX area tracing mmap pages",
		     record__parse_mmap_pages),
	OPT_BOOLEAN(0, "group", &record.opts.group,
		    "put the counters into a counter group"),
	OPT_CALLBACK_NOOPT('g', NULL, &callchain_param,
			   NULL, "enables call-graph recording",
			   &record_callchain_opt),
	OPT_CALLBACK(0, "call-graph", &record.opts,
		     "record_mode[,record_size]", record_callchain_help,
		     &record_parse_callchain_opt),
	OPT_INCR('v', "verbose", &verbose,
		    "be more verbose (show counter open errors, etc)"),
	OPT_BOOLEAN('q', "quiet", &quiet, "don't print any message"),
	OPT_BOOLEAN('s', "stat", &record.opts.inherit_stat,
		    "per thread counts"),
	OPT_BOOLEAN('d', "data", &record.opts.sample_address, "Record the sample addresses"),
	OPT_BOOLEAN_SET('T', "timestamp", &record.opts.sample_time,
			&record.opts.sample_time_set,
			"Record the sample timestamps"),
	OPT_BOOLEAN('P', "period", &record.opts.period, "Record the sample period"),
	OPT_BOOLEAN('n', "no-samples", &record.opts.no_samples,
		    "don't sample"),
	OPT_BOOLEAN_SET('N', "no-buildid-cache", &record.no_buildid_cache,
			&record.no_buildid_cache_set,
			"do not update the buildid cache"),
	OPT_BOOLEAN_SET('B', "no-buildid", &record.no_buildid,
			&record.no_buildid_set,
			"do not collect buildids in perf.data"),
	OPT_CALLBACK('G', "cgroup", &record.evlist, "name",
		     "monitor event in cgroup name only",
		     parse_cgroups),
	OPT_UINTEGER('D', "delay", &record.opts.initial_delay,
		     "ms to wait before starting measurement after program start"),
	OPT_STRING('u', "uid", &record.opts.target.uid_str, "user",
		   "user to profile"),

	OPT_CALLBACK_NOOPT('b', "branch-any", &record.opts.branch_stack,
		     "branch any", "sample any taken branches",
		     parse_branch_stack),

	OPT_CALLBACK('j', "branch-filter", &record.opts.branch_stack,
		     "branch filter mask", "branch stack filter modes",
		     parse_branch_stack),
	OPT_BOOLEAN('W', "weight", &record.opts.sample_weight,
		    "sample by weight (on special events only)"),
	OPT_BOOLEAN(0, "transaction", &record.opts.sample_transaction,
		    "sample transaction flags (special events only)"),
	OPT_BOOLEAN(0, "per-thread", &record.opts.target.per_thread,
		    "use per-thread mmaps"),
	OPT_CALLBACK_OPTARG('I', "intr-regs", &record.opts.sample_intr_regs, NULL, "any register",
		    "sample selected machine registers on interrupt,"
		    " use -I ? to list register names", parse_regs),
	OPT_BOOLEAN(0, "running-time", &record.opts.running_time,
		    "Record running/enabled time of read (:S) events"),
	OPT_CALLBACK('k', "clockid", &record.opts,
	"clockid", "clockid to use for events, see clock_gettime()",
	parse_clockid),
	OPT_STRING_OPTARG('S', "snapshot", &record.opts.auxtrace_snapshot_opts,
			  "opts", "AUX area tracing Snapshot Mode", ""),
	OPT_UINTEGER(0, "proc-map-timeout", &record.opts.proc_map_timeout,
			"per thread proc mmap processing timeout in ms"),
	OPT_BOOLEAN(0, "switch-events", &record.opts.record_switch_events,
		    "Record context switch events"),
	OPT_BOOLEAN_FLAG(0, "all-kernel", &record.opts.all_kernel,
			 "Configure all used events to run in kernel space.",
			 PARSE_OPT_EXCLUSIVE),
	OPT_BOOLEAN_FLAG(0, "all-user", &record.opts.all_user,
			 "Configure all used events to run in user space.",
			 PARSE_OPT_EXCLUSIVE),
	OPT_STRING(0, "clang-path", &llvm_param.clang_path, "clang path",
		   "clang binary to use for compiling BPF scriptlets"),
	OPT_STRING(0, "clang-opt", &llvm_param.clang_opt, "clang options",
		   "options passed to clang when compiling BPF scriptlets"),
	OPT_STRING(0, "vmlinux", &symbol_conf.vmlinux_name,
		   "file", "vmlinux pathname"),
	OPT_BOOLEAN(0, "buildid-all", &record.buildid_all,
		    "Record build-id of all DSOs regardless of hits"),
	OPT_BOOLEAN(0, "timestamp-filename", &record.timestamp_filename,
		    "append timestamp to output filename"),
	OPT_BOOLEAN(0, "switch-output", &record.switch_output,
		    "Switch output when receive SIGUSR2"),
	OPT_BOOLEAN(0, "dry-run", &dry_run,
		    "Parse options then exit"),
	OPT_END()
};

struct option *record_options = __record_options;

int cmd_record(int argc, const char **argv, const char *prefix __maybe_unused)
{
	int err;
	struct record *rec = &record;
	char errbuf[BUFSIZ];

#ifndef HAVE_LIBBPF_SUPPORT
# define set_nobuild(s, l, c) set_option_nobuild(record_options, s, l, "NO_LIBBPF=1", c)
	set_nobuild('\0', "clang-path", true);
	set_nobuild('\0', "clang-opt", true);
# undef set_nobuild
#endif

#ifndef HAVE_BPF_PROLOGUE
# if !defined (HAVE_DWARF_SUPPORT)
#  define REASON  "NO_DWARF=1"
# elif !defined (HAVE_LIBBPF_SUPPORT)
#  define REASON  "NO_LIBBPF=1"
# else
#  define REASON  "this architecture doesn't support BPF prologue"
# endif
# define set_nobuild(s, l, c) set_option_nobuild(record_options, s, l, REASON, c)
	set_nobuild('\0', "vmlinux", true);
# undef set_nobuild
# undef REASON
#endif

	rec->evlist = perf_evlist__new();
	if (rec->evlist == NULL)
		return -ENOMEM;

	perf_config(perf_record_config, rec);

	argc = parse_options(argc, argv, record_options, record_usage,
			    PARSE_OPT_STOP_AT_NON_OPTION);
	if (!argc && target__none(&rec->opts.target))
		usage_with_options(record_usage, record_options);

	if (nr_cgroups && !rec->opts.target.system_wide) {
		usage_with_options_msg(record_usage, record_options,
			"cgroup monitoring only available in system-wide mode");
	}

	if (rec->opts.record_switch_events &&
	    !perf_can_record_switch_events()) {
		ui__error("kernel does not support recording context switch events\n");
		parse_options_usage(record_usage, record_options, "switch-events", 0);
		return -EINVAL;
	}
2011-02-14 17:20:01 +08:00
|
|
|
|
2016-04-21 02:59:51 +08:00
|
|
|
if (rec->switch_output)
|
|
|
|
rec->timestamp_filename = true;
|
|
|
|
|
2015-04-09 23:53:45 +08:00
|
|
|
if (!rec->itr) {
|
|
|
|
rec->itr = auxtrace_record__init(rec->evlist, &err);
|
|
|
|
if (err)
|
|
|
|
return err;
|
|
|
|
}
|
|
|
|
|
2015-04-30 22:37:32 +08:00
|
|
|
err = auxtrace_parse_snapshot_options(rec->itr, &rec->opts,
|
|
|
|
rec->opts.auxtrace_snapshot_opts);
|
|
|
|
if (err)
|
|
|
|
return err;
|
|
|
|
|
2016-06-16 16:02:41 +08:00
|
|
|
if (dry_run)
|
|
|
|
return 0;
|
|
|
|
|
2016-04-08 23:07:24 +08:00
|
|
|
err = bpf__setup_stdout(rec->evlist);
|
|
|
|
if (err) {
|
|
|
|
bpf__strerror_setup_stdout(rec->evlist, err, errbuf, sizeof(errbuf));
|
|
|
|
pr_err("ERROR: Setup BPF stdout failed: %s\n",
|
|
|
|
errbuf);
|
|
|
|
return err;
|
|
|
|
}
|
|
|
|
|
2015-04-09 23:53:45 +08:00
|
|
|
err = -ENOMEM;
|
|
|
|
|
2014-08-12 14:40:45 +08:00
|
|
|
symbol__init(NULL);
|
2010-11-27 05:39:15 +08:00
|
|
|
|
perf symbols: Handle /proc/sys/kernel/kptr_restrict
Perf uses /proc/modules to figure out where kernel modules are loaded.
With the advent of kptr_restrict, non root users get zeroes for all module
start addresses.
So check if kptr_restrict is non zero and don't generate the syntethic
PERF_RECORD_MMAP events for them.
Warn the user about it in perf record and in perf report.
In perf report the reference relocation symbol being zero means that
kptr_restrict was set, thus /proc/kallsyms has only zeroed addresses, so don't
use it to fixup symbol addresses when using a valid kallsyms (in the buildid
cache) or vmlinux (in the vmlinux path) build-id located automatically or
specified by the user.
Provide an explanation about it in 'perf report' if kernel samples were taken,
checking if a suitable vmlinux or kallsyms was found/specified.
Restricted /proc/kallsyms don't go to the buildid cache anymore.
Example:
[acme@emilia ~]$ perf record -F 100000 sleep 1
WARNING: Kernel address maps (/proc/{kallsyms,modules}) are restricted, check
/proc/sys/kernel/kptr_restrict.
Samples in kernel functions may not be resolved if a suitable vmlinux file is
not found in the buildid cache or in the vmlinux path.
Samples in kernel modules won't be resolved at all.
If some relocation was applied (e.g. kexec) symbols may be misresolved even
with a suitable vmlinux or kallsyms file.
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.005 MB perf.data (~231 samples) ]
[acme@emilia ~]$
[acme@emilia ~]$ perf report --stdio
Kernel address maps (/proc/{kallsyms,modules}) were restricted,
check /proc/sys/kernel/kptr_restrict before running 'perf record'.
If some relocation was applied (e.g. kexec) symbols may be misresolved.
Samples in kernel modules can't be resolved as well.
# Events: 13 cycles
#
# Overhead Command Shared Object Symbol
# ........ ....... ................. .....................
#
20.24% sleep [kernel.kallsyms] [k] page_fault
20.04% sleep [kernel.kallsyms] [k] filemap_fault
19.78% sleep [kernel.kallsyms] [k] __lru_cache_add
19.69% sleep ld-2.12.so [.] memcpy
14.71% sleep [kernel.kallsyms] [k] dput
4.70% sleep [kernel.kallsyms] [k] flush_signal_handlers
0.73% sleep [kernel.kallsyms] [k] perf_event_comm
0.11% sleep [kernel.kallsyms] [k] native_write_msr_safe
#
# (For a higher level overview, try: perf report --sort comm,dso)
#
[acme@emilia ~]$
This is because it found a suitable vmlinux (build-id checked) in
/lib/modules/2.6.39-rc7+/build/vmlinux (use -v in perf report to see the long
file name).
If we remove that file from the vmlinux path:
[root@emilia ~]# mv /lib/modules/2.6.39-rc7+/build/vmlinux \
/lib/modules/2.6.39-rc7+/build/vmlinux.OFF
[acme@emilia ~]$ perf report --stdio
[kernel.kallsyms] with build id 57298cdbe0131f6871667ec0eaab4804dcf6f562
not found, continuing without symbols
Kernel address maps (/proc/{kallsyms,modules}) were restricted, check
/proc/sys/kernel/kptr_restrict before running 'perf record'.
As no suitable kallsyms nor vmlinux was found, kernel samples can't be
resolved.
Samples in kernel modules can't be resolved as well.
# Events: 13 cycles
#
# Overhead Command Shared Object Symbol
# ........ ....... ................. ......
#
80.31% sleep [kernel.kallsyms] [k] 0xffffffff8103425a
19.69% sleep ld-2.12.so [.] memcpy
#
# (For a higher level overview, try: perf report --sort comm,dso)
#
[acme@emilia ~]$
Reported-by: Stephane Eranian <eranian@google.com>
Suggested-by: David Miller <davem@davemloft.net>
Cc: Dave Jones <davej@redhat.com>
Cc: David Miller <davem@davemloft.net>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Kees Cook <kees.cook@canonical.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Tom Zanussi <tzanussi@gmail.com>
Link: http://lkml.kernel.org/n/tip-mt512joaxxbhhp1odop04yit@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2011-05-26 20:53:51 +08:00
|
|
|
if (symbol_conf.kptr_restrict)
|
2011-05-27 22:00:41 +08:00
|
|
|
pr_warning(
|
|
|
|
"WARNING: Kernel address maps (/proc/{kallsyms,modules}) are restricted,\n"
|
|
|
|
"check /proc/sys/kernel/kptr_restrict.\n\n"
|
|
|
|
"Samples in kernel functions may not be resolved if a suitable vmlinux\n"
|
|
|
|
"file is not found in the buildid cache or in the vmlinux path.\n\n"
|
|
|
|
"Samples in kernel modules won't be resolved at all.\n\n"
|
|
|
|
"If some relocation was applied (e.g. kexec) symbols may be misresolved\n"
|
|
|
|
"even with a suitable vmlinux or kallsyms file.\n\n");
|
perf symbols: Handle /proc/sys/kernel/kptr_restrict
Perf uses /proc/modules to figure out where kernel modules are loaded.
With the advent of kptr_restrict, non root users get zeroes for all module
start addresses.
So check if kptr_restrict is non zero and don't generate the syntethic
PERF_RECORD_MMAP events for them.
Warn the user about it in perf record and in perf report.
In perf report the reference relocation symbol being zero means that
kptr_restrict was set, thus /proc/kallsyms has only zeroed addresses, so don't
use it to fixup symbol addresses when using a valid kallsyms (in the buildid
cache) or vmlinux (in the vmlinux path) build-id located automatically or
specified by the user.
Provide an explanation about it in 'perf report' if kernel samples were taken,
checking if a suitable vmlinux or kallsyms was found/specified.
Restricted /proc/kallsyms don't go to the buildid cache anymore.
Example:
[acme@emilia ~]$ perf record -F 100000 sleep 1
WARNING: Kernel address maps (/proc/{kallsyms,modules}) are restricted, check
/proc/sys/kernel/kptr_restrict.
Samples in kernel functions may not be resolved if a suitable vmlinux file is
not found in the buildid cache or in the vmlinux path.
Samples in kernel modules won't be resolved at all.
If some relocation was applied (e.g. kexec) symbols may be misresolved even
with a suitable vmlinux or kallsyms file.
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.005 MB perf.data (~231 samples) ]
[acme@emilia ~]$
[acme@emilia ~]$ perf report --stdio
Kernel address maps (/proc/{kallsyms,modules}) were restricted,
check /proc/sys/kernel/kptr_restrict before running 'perf record'.
If some relocation was applied (e.g. kexec) symbols may be misresolved.
Samples in kernel modules can't be resolved as well.
# Events: 13 cycles
#
# Overhead Command Shared Object Symbol
# ........ ....... ................. .....................
#
20.24% sleep [kernel.kallsyms] [k] page_fault
20.04% sleep [kernel.kallsyms] [k] filemap_fault
19.78% sleep [kernel.kallsyms] [k] __lru_cache_add
19.69% sleep ld-2.12.so [.] memcpy
14.71% sleep [kernel.kallsyms] [k] dput
4.70% sleep [kernel.kallsyms] [k] flush_signal_handlers
0.73% sleep [kernel.kallsyms] [k] perf_event_comm
0.11% sleep [kernel.kallsyms] [k] native_write_msr_safe
#
# (For a higher level overview, try: perf report --sort comm,dso)
#
[acme@emilia ~]$
This is because it found a suitable vmlinux (build-id checked) in
/lib/modules/2.6.39-rc7+/build/vmlinux (use -v in perf report to see the long
file name).
If we remove that file from the vmlinux path:
[root@emilia ~]# mv /lib/modules/2.6.39-rc7+/build/vmlinux \
/lib/modules/2.6.39-rc7+/build/vmlinux.OFF
[acme@emilia ~]$ perf report --stdio
[kernel.kallsyms] with build id 57298cdbe0131f6871667ec0eaab4804dcf6f562
not found, continuing without symbols
Kernel address maps (/proc/{kallsyms,modules}) were restricted, check
/proc/sys/kernel/kptr_restrict before running 'perf record'.
As no suitable kallsyms nor vmlinux was found, kernel samples can't be
resolved.
Samples in kernel modules can't be resolved as well.
# Events: 13 cycles
#
# Overhead Command Shared Object Symbol
# ........ ....... ................. ......
#
80.31% sleep [kernel.kallsyms] [k] 0xffffffff8103425a
19.69% sleep ld-2.12.so [.] memcpy
#
# (For a higher level overview, try: perf report --sort comm,dso)
#
[acme@emilia ~]$
Reported-by: Stephane Eranian <eranian@google.com>
Suggested-by: David Miller <davem@davemloft.net>
Cc: Dave Jones <davej@redhat.com>
Cc: David Miller <davem@davemloft.net>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Kees Cook <kees.cook@canonical.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Tom Zanussi <tzanussi@gmail.com>
Link: http://lkml.kernel.org/n/tip-mt512joaxxbhhp1odop04yit@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2011-05-26 20:53:51 +08:00
|
|
|
|
2016-04-21 02:59:52 +08:00
|
|
|
if (rec->no_buildid_cache || rec->no_buildid) {
|
2010-06-17 17:39:01 +08:00
|
|
|
disable_buildid_cache();
|
2016-04-21 02:59:52 +08:00
|
|
|
} else if (rec->switch_output) {
|
|
|
|
/*
|
|
|
|
* In 'perf record --switch-output', disable buildid
|
|
|
|
* generation by default to reduce data file switching
|
|
|
|
* overhead. Still generate buildid if they are required
|
|
|
|
* explicitly using
|
|
|
|
*
|
|
|
|
* perf record --signal-trigger --no-no-buildid \
|
|
|
|
* --no-no-buildid-cache
|
|
|
|
*
|
|
|
|
* Following code equals to:
|
|
|
|
*
|
|
|
|
* if ((rec->no_buildid || !rec->no_buildid_set) &&
|
|
|
|
* (rec->no_buildid_cache || !rec->no_buildid_cache_set))
|
|
|
|
* disable_buildid_cache();
|
|
|
|
*/
|
|
|
|
bool disable = true;
|
|
|
|
|
|
|
|
if (rec->no_buildid_set && !rec->no_buildid)
|
|
|
|
disable = false;
|
|
|
|
if (rec->no_buildid_cache_set && !rec->no_buildid_cache)
|
|
|
|
disable = false;
|
|
|
|
if (disable) {
|
|
|
|
rec->no_buildid = true;
|
|
|
|
rec->no_buildid_cache = true;
|
|
|
|
disable_buildid_cache();
|
|
|
|
}
|
|
|
|
}
|
2009-12-16 06:04:40 +08:00
|
|
|
|
2014-01-04 02:03:26 +08:00
|
|
|
if (rec->evlist->nr_entries == 0 &&
|
|
|
|
perf_evlist__add_default(rec->evlist) < 0) {
|
2011-01-04 02:39:04 +08:00
|
|
|
pr_err("Not enough memory for event selector list\n");
|
|
|
|
goto out_symbol_exit;
|
2009-06-12 05:11:50 +08:00
|
|
|
}
|
2009-05-26 15:17:18 +08:00
|
|
|
|
2013-11-18 17:55:57 +08:00
|
|
|
if (rec->opts.target.tid && !rec->opts.no_inherit_set)
|
|
|
|
rec->opts.no_inherit = true;
|
|
|
|
|
2013-11-13 03:46:16 +08:00
|
|
|
err = target__validate(&rec->opts.target);
|
2012-05-07 13:09:02 +08:00
|
|
|
if (err) {
|
2013-11-13 03:46:16 +08:00
|
|
|
target__strerror(&rec->opts.target, err, errbuf, BUFSIZ);
|
2012-05-07 13:09:02 +08:00
|
|
|
ui__warning("%s", errbuf);
|
|
|
|
}
|
|
|
|
|
2013-11-13 03:46:16 +08:00
|
|
|
err = target__parse_uid(&rec->opts.target);
|
2012-05-07 13:09:02 +08:00
|
|
|
if (err) {
|
|
|
|
int saved_errno = errno;
|
2012-04-26 13:15:18 +08:00
|
|
|
|
2013-11-13 03:46:16 +08:00
|
|
|
target__strerror(&rec->opts.target, err, errbuf, BUFSIZ);
|
2012-05-29 12:22:57 +08:00
|
|
|
ui__error("%s", errbuf);
|
2012-05-07 13:09:02 +08:00
|
|
|
|
|
|
|
err = -saved_errno;
|
2013-03-15 13:48:51 +08:00
|
|
|
goto out_symbol_exit;
|
2012-05-07 13:09:02 +08:00
|
|
|
}
|
2012-01-20 00:08:15 +08:00
|
|
|
|
2012-05-07 13:09:02 +08:00
|
|
|
err = -ENOMEM;
|
2014-01-04 02:03:26 +08:00
|
|
|
if (perf_evlist__create_maps(rec->evlist, &rec->opts.target) < 0)
|
2011-01-13 00:28:51 +08:00
|
|
|
usage_with_options(record_usage, record_options);
|
2011-01-04 02:39:04 +08:00
|
|
|
|
2015-04-09 23:53:45 +08:00
|
|
|
err = auxtrace_record__options(rec->itr, rec->evlist, &rec->opts);
|
|
|
|
if (err)
|
|
|
|
goto out_symbol_exit;
|
|
|
|
|
2016-01-11 21:37:09 +08:00
|
|
|
/*
|
|
|
|
* We take all buildids when the file contains
|
|
|
|
* AUX area tracing data because we do not decode the
|
|
|
|
* trace because it would take too long.
|
|
|
|
*/
|
|
|
|
if (rec->opts.full_auxtrace)
|
|
|
|
rec->buildid_all = true;
|
|
|
|
|
2013-12-20 01:43:45 +08:00
|
|
|
if (record_opts__config(&rec->opts)) {
|
2010-07-30 01:08:55 +08:00
|
|
|
err = -EINVAL;
|
2014-01-04 02:56:06 +08:00
|
|
|
goto out_symbol_exit;
|
2009-10-12 13:56:03 +08:00
|
|
|
}
|
|
|
|
|
2011-11-25 18:19:45 +08:00
|
|
|
err = __cmd_record(&record, argc, argv);
|
2010-07-31 05:31:28 +08:00
|
|
|
out_symbol_exit:
|
2014-05-12 08:47:24 +08:00
|
|
|
perf_evlist__delete(rec->evlist);
|
2010-07-31 05:31:28 +08:00
|
|
|
symbol__exit();
|
2015-04-09 23:53:45 +08:00
|
|
|
auxtrace_record__free(rec->itr);
|
2010-07-30 01:08:55 +08:00
|
|
|
return err;
|
2009-05-26 15:17:18 +08:00
|
|
|
}
|
2015-04-30 22:37:32 +08:00
|
|
|
|
|
|
|
static void snapshot_sig_handler(int sig __maybe_unused)
|
|
|
|
{
|
2016-04-21 02:59:49 +08:00
|
|
|
if (trigger_is_ready(&auxtrace_snapshot_trigger)) {
|
|
|
|
trigger_hit(&auxtrace_snapshot_trigger);
|
|
|
|
auxtrace_record__snapshot_started = 1;
|
|
|
|
if (auxtrace_record__snapshot_start(record.itr))
|
|
|
|
trigger_error(&auxtrace_snapshot_trigger);
|
|
|
|
}
|
2016-04-21 02:59:50 +08:00
|
|
|
|
|
|
|
if (trigger_is_ready(&switch_output_trigger))
|
|
|
|
trigger_hit(&switch_output_trigger);
|
2015-04-30 22:37:32 +08:00
|
|
|
}
|