License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner and Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to a license
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX license identifier to apply to
a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files, created by Philippe Ombredanne. Philippe prepared the
base worksheet and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to the file. She confirmed any determination that was not
immediately clear with lawyers working with the Linux Foundation.
Criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source.
- Files that already had some variant of a license header in them were
included (even if <5 lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, the file was
considered to have no license information in it, and the top-level
COPYING file license was applied.
For non */uapi/* files that summary was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 11139
and resulted in the first patch in this series.
If the file was under a */uapi/* path, it was tagged "GPL-2.0 WITH
Linux-syscall-note"; otherwise it was "GPL-2.0". The results were:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note 930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL-family license was found in the file or it had no licensing in
it (per the prior point). Results summary:
SPDX license identifier # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note 270
GPL-2.0+ WITH Linux-syscall-note 169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17
LGPL-2.1+ WITH Linux-syscall-note 15
GPL-1.0+ WITH Linux-syscall-note 14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5
LGPL-2.0+ WITH Linux-syscall-note 4
LGPL-2.1 WITH Linux-syscall-note 3
((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later.
In total, over 70 hours of logged manual review was done on the
spreadsheet by Kate, Philippe and Thomas to determine the SPDX license
identifiers to apply to the source files, with confirmation in some
cases by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were any new insights.
The Windriver scanner is based in part on an older version of FOSSology,
so the two are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files, Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and have been fixed to reflect the
correct identifier.
Additionally, Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 patched files from the initial patch
version earlier this week, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-01 22:07:57 +08:00
|
|
|
// SPDX-License-Identifier: GPL-2.0
|
2010-03-26 06:59:00 +08:00
|
|
|
#include <dirent.h>
|
|
|
|
#include <errno.h>
|
|
|
|
#include <stdlib.h>
|
|
|
|
#include <stdio.h>
|
|
|
|
#include <string.h>
|
2019-08-27 09:39:15 +08:00
|
|
|
#include <linux/capability.h>
|
2017-04-17 22:39:06 +08:00
|
|
|
#include <linux/kernel.h>
|
2018-04-27 03:11:47 +08:00
|
|
|
#include <linux/mman.h>
|
2019-08-30 03:18:59 +08:00
|
|
|
#include <linux/string.h>
|
2019-03-05 22:47:48 +08:00
|
|
|
#include <linux/time64.h>
|
2010-03-26 06:59:00 +08:00
|
|
|
#include <sys/types.h>
|
|
|
|
#include <sys/stat.h>
|
|
|
|
#include <sys/param.h>
|
|
|
|
#include <fcntl.h>
|
|
|
|
#include <unistd.h>
|
2011-01-23 06:37:02 +08:00
|
|
|
#include <inttypes.h>
|
2016-08-26 03:09:21 +08:00
|
|
|
#include "annotate.h"
|
2010-05-20 23:15:33 +08:00
|
|
|
#include "build-id.h"
|
2019-08-27 09:39:15 +08:00
|
|
|
#include "cap.h"
|
2019-08-30 20:43:25 +08:00
|
|
|
#include "dso.h"
|
2019-09-03 21:56:06 +08:00
|
|
|
#include "util.h" // lsdir()
|
2010-07-21 01:42:52 +08:00
|
|
|
#include "debug.h"
|
2019-08-27 09:39:15 +08:00
|
|
|
#include "event.h"
|
2012-11-09 22:32:52 +08:00
|
|
|
#include "machine.h"
|
2019-01-27 20:42:37 +08:00
|
|
|
#include "map.h"
|
2009-05-29 01:55:04 +08:00
|
|
|
#include "symbol.h"
|
2019-08-31 02:09:54 +08:00
|
|
|
#include "map_symbol.h"
|
|
|
|
#include "mem-events.h"
|
2019-08-30 21:26:37 +08:00
|
|
|
#include "symsrc.h"
|
2010-03-26 06:59:00 +08:00
|
|
|
#include "strlist.h"
|
2015-03-24 23:52:41 +08:00
|
|
|
#include "intlist.h"
|
2017-07-06 09:48:08 +08:00
|
|
|
#include "namespaces.h"
|
2014-08-12 14:40:45 +08:00
|
|
|
#include "header.h"
|
2017-04-18 22:33:48 +08:00
|
|
|
#include "path.h"
|
tools perf: Move from sane_ctype.h obtained from git to the Linux's original
We got the sane_ctype.h header from git and have kept using it so far,
but since that code originally came from the kernel sources to the git
sources, perhaps it's better to just use the one in the kernel, so that
we can leverage tools/perf/check_headers.sh to be notified when our copy
gets out of sync, i.e. when fixes or goodies are added to the code we've
copied.
This will help with things like tools/lib/string.c, where we want to have
more things in common with the kernel, such as strim(), skip_spaces(),
etc., so as to go on removing the things that we have in tools/perf/util/
and instead use the code in the kernel, indirectly, removing things
like EXPORT_SYMBOL(), etc., and getting notified when fixes and
improvements are made to the original code.
Hopefully this should also help reduce the difference between the code
hosted in tools/ and the code in the kernel proper.
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: https://lkml.kernel.org/n/tip-7k9868l713wqtgo01xxygn12@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-06-26 04:27:31 +08:00
|
|
|
#include <linux/ctype.h>
|
2019-07-04 22:32:27 +08:00
|
|
|
#include <linux/zalloc.h>
|
2009-05-29 01:55:04 +08:00
|
|
|
|
|
|
|
#include <elf.h>
|
2009-11-19 06:20:52 +08:00
|
|
|
#include <limits.h>
|
2013-12-11 20:15:00 +08:00
|
|
|
#include <symbol/kallsyms.h>
|
2009-10-02 14:29:58 +08:00
|
|
|
#include <sys/utsname.h>
|
2009-08-05 20:05:16 +08:00
|
|
|
|
2016-09-02 06:25:52 +08:00
|
|
|
static int dso__load_kernel_sym(struct dso *dso, struct map *map);
|
|
|
|
static int dso__load_guest_kernel_sym(struct dso *dso, struct map *map);
|
2016-09-02 04:54:31 +08:00
|
|
|
static bool symbol__is_idle(const char *name);
|
|
|
|
|
2012-12-08 04:39:39 +08:00
|
|
|
int vmlinux_path__nr_entries;
|
|
|
|
char **vmlinux_path;
|
2009-10-02 14:29:58 +08:00
|
|
|
|
2009-12-16 06:04:39 +08:00
|
|
|
struct symbol_conf symbol_conf = {
|
2019-03-05 22:47:47 +08:00
|
|
|
.nanosecs = false,
|
2013-12-24 15:19:25 +08:00
|
|
|
.use_modules = true,
|
|
|
|
.try_vmlinux_path = true,
|
|
|
|
.demangle = true,
|
2014-09-13 12:15:05 +08:00
|
|
|
.demangle_kernel = false,
|
2013-12-24 15:19:25 +08:00
|
|
|
.cumulate_callchain = true,
|
2019-03-05 22:47:48 +08:00
|
|
|
.time_quantum = 100 * NSEC_PER_MSEC, /* 100ms */
|
2014-06-28 00:26:58 +08:00
|
|
|
.show_hist_headers = true,
|
2013-12-24 15:19:25 +08:00
|
|
|
.symfs = "",
|
2015-11-29 22:24:17 +08:00
|
|
|
.event_group = true,
|
2017-10-19 19:38:36 +08:00
|
|
|
.inline_name = true,
|
2019-03-11 22:44:58 +08:00
|
|
|
.res_sample = 0,
|
2009-11-24 22:05:15 +08:00
|
|
|
};
|
|
|
|
|
2012-07-22 20:14:32 +08:00
|
|
|
static enum dso_binary_type binary_type_symtab[] = {
|
|
|
|
DSO_BINARY_TYPE__KALLSYMS,
|
|
|
|
DSO_BINARY_TYPE__GUEST_KALLSYMS,
|
|
|
|
DSO_BINARY_TYPE__JAVA_JIT,
|
|
|
|
DSO_BINARY_TYPE__DEBUGLINK,
|
|
|
|
DSO_BINARY_TYPE__BUILD_ID_CACHE,
|
2017-07-06 09:48:13 +08:00
|
|
|
DSO_BINARY_TYPE__BUILD_ID_CACHE_DEBUGINFO,
|
2012-07-22 20:14:32 +08:00
|
|
|
DSO_BINARY_TYPE__FEDORA_DEBUGINFO,
|
|
|
|
DSO_BINARY_TYPE__UBUNTU_DEBUGINFO,
|
|
|
|
DSO_BINARY_TYPE__BUILDID_DEBUGINFO,
|
|
|
|
DSO_BINARY_TYPE__SYSTEM_PATH_DSO,
|
|
|
|
DSO_BINARY_TYPE__GUEST_KMODULE,
|
2014-11-04 09:14:27 +08:00
|
|
|
DSO_BINARY_TYPE__GUEST_KMODULE_COMP,
|
2012-07-22 20:14:32 +08:00
|
|
|
DSO_BINARY_TYPE__SYSTEM_PATH_KMODULE,
|
2014-11-04 09:14:27 +08:00
|
|
|
DSO_BINARY_TYPE__SYSTEM_PATH_KMODULE_COMP,
|
2013-09-18 21:56:14 +08:00
|
|
|
DSO_BINARY_TYPE__OPENEMBEDDED_DEBUGINFO,
|
2020-05-26 23:52:07 +08:00
|
|
|
DSO_BINARY_TYPE__MIXEDUP_UBUNTU_DEBUGINFO,
|
2012-07-22 20:14:32 +08:00
|
|
|
DSO_BINARY_TYPE__NOT_FOUND,
|
|
|
|
};
|
|
|
|
|
2012-08-01 20:47:57 +08:00
|
|
|
#define DSO_BINARY_TYPE__SYMTAB_CNT ARRAY_SIZE(binary_type_symtab)
|
2012-07-22 20:14:32 +08:00
|
|
|
|
2018-04-27 03:52:34 +08:00
|
|
|
static bool symbol_type__filter(char symbol_type)
|
2009-12-12 00:50:37 +08:00
|
|
|
{
|
2011-08-24 14:40:16 +08:00
|
|
|
symbol_type = toupper(symbol_type);
|
2018-06-06 04:06:57 +08:00
|
|
|
return symbol_type == 'T' || symbol_type == 'W' || symbol_type == 'D' || symbol_type == 'B';
|
2009-12-12 00:50:37 +08:00
|
|
|
}
|
|
|
|
|
2011-08-24 14:40:17 +08:00
|
|
|
static int prefix_underscores_count(const char *str)
|
|
|
|
{
|
|
|
|
const char *tail = str;
|
|
|
|
|
|
|
|
while (*tail == '_')
|
|
|
|
tail++;
|
|
|
|
|
|
|
|
return tail - str;
|
|
|
|
}
|
|
|
|
|
2017-12-09 00:28:12 +08:00
|
|
|
const char * __weak arch__normalize_symbol_name(const char *name)
|
|
|
|
{
|
|
|
|
return name;
|
|
|
|
}
|
|
|
|
|
perf symbols: Allow user probes on versioned symbols
Symbol versioning, as in glibc, results in symbols being defined as:
<real symbol>@[@]<version>
(Note that "@@" identifies a default symbol, if the symbol name is
repeated.)
perf is currently unable to deal with this, and is unable to create user
probes at such symbols:
--
$ nm /lib/powerpc64le-linux-gnu/libpthread.so.0 | grep pthread_create
0000000000008d30 t __pthread_create_2_1
0000000000008d30 T pthread_create@@GLIBC_2.17
$ /usr/bin/sudo perf probe -v -x /lib/powerpc64le-linux-gnu/libpthread.so.0 pthread_create
probe-definition(0): pthread_create
symbol:pthread_create file:(null) line:0 offset:0 return:0 lazy:(null)
0 arguments
Open Debuginfo file: /usr/lib/debug/lib/powerpc64le-linux-gnu/libpthread-2.19.so
Try to find probe point from debuginfo.
Probe point 'pthread_create' not found.
Error: Failed to add events. Reason: No such file or directory (Code: -2)
--
One is not able to specify the fully versioned symbol, either, due to
syntactic conflicts with other uses of "@" by perf:
--
$ /usr/bin/sudo perf probe -v -x /lib/powerpc64le-linux-gnu/libpthread.so.0 pthread_create@@GLIBC_2.17
probe-definition(0): pthread_create@@GLIBC_2.17
Semantic error :SRC@SRC is not allowed.
0 arguments
Error: Command Parse Error. Reason: Invalid argument (Code: -22)
--
This patch ignores versioning for default symbols, thus allowing probes to be
created for these symbols:
--
$ /usr/bin/sudo ./perf probe -x /lib/powerpc64le-linux-gnu/libpthread.so.0 pthread_create
Added new event:
probe_libpthread:pthread_create (on pthread_create in /lib/powerpc64le-linux-gnu/libpthread-2.19.so)
You can now use it in all perf tools, such as:
perf record -e probe_libpthread:pthread_create -aR sleep 1
$ /usr/bin/sudo ./perf record -e probe_libpthread:pthread_create -aR ./test 2
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.052 MB perf.data (2 samples) ]
$ /usr/bin/sudo ./perf script
test 2915 [000] 19124.260729: probe_libpthread:pthread_create: (3fff99248d38)
test 2916 [000] 19124.260962: probe_libpthread:pthread_create: (3fff99248d38)
$ /usr/bin/sudo ./perf probe --del=probe_libpthread:pthread_create
Removed event: probe_libpthread:pthread_create
--
Committer note:
Change the variable storing the result of strlen() to 'int', to fix the build
on debian:experimental-x-mipsel, fedora:24-x-ARC-uClibc, ubuntu:16.04-x-arm,
etc:
util/symbol.c: In function 'symbol__match_symbol_name':
util/symbol.c:422:11: error: comparison between signed and unsigned integer expressions [-Werror=sign-compare]
if (len < versioning - name)
^
Signed-off-by: Paul A. Clarke <pc@us.ibm.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Link: http://lkml.kernel.org/r/c2b18d9c-17f8-9285-4868-f58b6359ccac@us.ibm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-04-26 02:15:49 +08:00
|
|
|
int __weak arch__compare_symbol_names(const char *namea, const char *nameb)
|
|
|
|
{
|
|
|
|
return strcmp(namea, nameb);
|
|
|
|
}
|
|
|
|
|
|
|
|
int __weak arch__compare_symbol_names_n(const char *namea, const char *nameb,
|
|
|
|
unsigned int n)
|
|
|
|
{
|
|
|
|
return strncmp(namea, nameb, n);
|
|
|
|
}
|
|
|
|
|
2015-04-28 20:05:36 +08:00
|
|
|
int __weak arch__choose_best_symbol(struct symbol *syma,
|
|
|
|
struct symbol *symb __maybe_unused)
|
|
|
|
{
|
|
|
|
/* Avoid "SyS" kernel syscall aliases */
|
|
|
|
if (strlen(syma->name) >= 3 && !strncmp(syma->name, "SyS", 3))
|
|
|
|
return SYMBOL_B;
|
|
|
|
if (strlen(syma->name) >= 10 && !strncmp(syma->name, "compat_SyS", 10))
|
|
|
|
return SYMBOL_B;
|
|
|
|
|
|
|
|
return SYMBOL_A;
|
|
|
|
}
|
2011-08-24 14:40:17 +08:00
|
|
|
|
|
|
|
static int choose_best_symbol(struct symbol *syma, struct symbol *symb)
|
|
|
|
{
|
|
|
|
s64 a;
|
|
|
|
s64 b;
|
2013-08-07 19:38:49 +08:00
|
|
|
size_t na, nb;
|
2011-08-24 14:40:17 +08:00
|
|
|
|
|
|
|
/* Prefer a symbol with non zero length */
|
|
|
|
a = syma->end - syma->start;
|
|
|
|
b = symb->end - symb->start;
|
|
|
|
if ((b == 0) && (a > 0))
|
|
|
|
return SYMBOL_A;
|
|
|
|
else if ((a == 0) && (b > 0))
|
|
|
|
return SYMBOL_B;
|
|
|
|
|
|
|
|
/* Prefer a non weak symbol over a weak one */
|
|
|
|
a = syma->binding == STB_WEAK;
|
|
|
|
b = symb->binding == STB_WEAK;
|
|
|
|
if (b && !a)
|
|
|
|
return SYMBOL_A;
|
|
|
|
if (a && !b)
|
|
|
|
return SYMBOL_B;
|
|
|
|
|
|
|
|
/* Prefer a global symbol over a non global one */
|
|
|
|
a = syma->binding == STB_GLOBAL;
|
|
|
|
b = symb->binding == STB_GLOBAL;
|
|
|
|
if (a && !b)
|
|
|
|
return SYMBOL_A;
|
|
|
|
if (b && !a)
|
|
|
|
return SYMBOL_B;
|
|
|
|
|
|
|
|
/* Prefer a symbol with less underscores */
|
|
|
|
a = prefix_underscores_count(syma->name);
|
|
|
|
b = prefix_underscores_count(symb->name);
|
|
|
|
if (b > a)
|
|
|
|
return SYMBOL_A;
|
|
|
|
else if (a > b)
|
|
|
|
return SYMBOL_B;
|
|
|
|
|
2013-08-07 19:38:49 +08:00
|
|
|
/* Choose the symbol with the longest name */
|
|
|
|
na = strlen(syma->name);
|
|
|
|
nb = strlen(symb->name);
|
|
|
|
if (na > nb)
|
2011-08-24 14:40:17 +08:00
|
|
|
return SYMBOL_A;
|
2013-08-07 19:38:49 +08:00
|
|
|
else if (na < nb)
|
2011-08-24 14:40:17 +08:00
|
|
|
return SYMBOL_B;
|
2013-08-07 19:38:49 +08:00
|
|
|
|
2015-04-28 20:05:36 +08:00
|
|
|
return arch__choose_best_symbol(syma, symb);
|
2011-08-24 14:40:17 +08:00
|
|
|
}
|
|
|
|
|
2018-12-07 03:18:17 +08:00
|
|
|
void symbols__fixup_duplicate(struct rb_root_cached *symbols)
|
2011-08-24 14:40:17 +08:00
|
|
|
{
|
|
|
|
struct rb_node *nd;
|
|
|
|
struct symbol *curr, *next;
|
|
|
|
|
2016-09-01 21:56:06 +08:00
|
|
|
if (symbol_conf.allow_aliases)
|
|
|
|
return;
|
|
|
|
|
2018-12-07 03:18:17 +08:00
|
|
|
nd = rb_first_cached(symbols);
|
2011-08-24 14:40:17 +08:00
|
|
|
|
|
|
|
while (nd) {
|
|
|
|
curr = rb_entry(nd, struct symbol, rb_node);
|
|
|
|
again:
|
|
|
|
nd = rb_next(&curr->rb_node);
|
|
|
|
next = rb_entry(nd, struct symbol, rb_node);
|
|
|
|
|
|
|
|
if (!nd)
|
|
|
|
break;
|
|
|
|
|
|
|
|
if (curr->start != next->start)
|
|
|
|
continue;
|
|
|
|
|
|
|
|
if (choose_best_symbol(curr, next) == SYMBOL_A) {
|
2023-01-31 21:16:20 +08:00
|
|
|
if (next->type == STT_GNU_IFUNC)
|
|
|
|
curr->ifunc_alias = true;
|
2018-12-07 03:18:17 +08:00
|
|
|
rb_erase_cached(&next->rb_node, symbols);
|
2013-10-11 08:27:59 +08:00
|
|
|
symbol__delete(next);
|
2011-08-24 14:40:17 +08:00
|
|
|
goto again;
|
|
|
|
} else {
|
2023-01-31 21:16:20 +08:00
|
|
|
if (curr->type == STT_GNU_IFUNC)
|
|
|
|
next->ifunc_alias = true;
|
2011-08-24 14:40:17 +08:00
|
|
|
nd = rb_next(&curr->rb_node);
|
2018-12-07 03:18:17 +08:00
|
|
|
rb_erase_cached(&curr->rb_node, symbols);
|
2013-10-11 08:27:59 +08:00
|
|
|
symbol__delete(curr);
|
2011-08-24 14:40:17 +08:00
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2022-04-16 08:40:47 +08:00
|
|
|
/* Update zero-sized symbols using the address of the next symbol */
|
|
|
|
void symbols__fixup_end(struct rb_root_cached *symbols, bool is_kallsyms)
|
2009-10-06 01:26:17 +08:00
|
|
|
{
|
2018-12-07 03:18:17 +08:00
|
|
|
struct rb_node *nd, *prevnd = rb_first_cached(symbols);
|
2009-10-08 00:48:56 +08:00
|
|
|
struct symbol *curr, *prev;
|
2009-10-06 01:26:17 +08:00
|
|
|
|
|
|
|
if (prevnd == NULL)
|
|
|
|
return;
|
|
|
|
|
2009-10-08 00:48:56 +08:00
|
|
|
curr = rb_entry(prevnd, struct symbol, rb_node);
|
|
|
|
|
2009-10-06 01:26:17 +08:00
|
|
|
for (nd = rb_next(prevnd); nd; nd = rb_next(nd)) {
|
2009-10-08 00:48:56 +08:00
|
|
|
prev = curr;
|
|
|
|
curr = rb_entry(nd, struct symbol, rb_node);
|
2009-10-06 01:26:17 +08:00
|
|
|
|
2022-04-16 08:40:47 +08:00
|
|
|
/*
|
|
|
|
* On some architecture kernel text segment start is located at
|
|
|
|
* some low memory address, while modules are located at high
|
|
|
|
* memory addresses (or vice versa). The gap between end of
|
|
|
|
* kernel text segment and beginning of first module's text
|
|
|
|
* segment is very big. Therefore do not fill this gap and do
|
|
|
|
* not assign it to the kernel dso map (kallsyms).
|
|
|
|
*
|
|
|
|
* In kallsyms, it determines module symbols using '[' character
|
|
|
|
* like in:
|
|
|
|
* ffffffffc1937000 T hdmi_driver_init [snd_hda_codec_hdmi]
|
|
|
|
*/
|
|
|
|
if (prev->end == prev->start) {
|
|
|
|
/* Last kernel/module symbol mapped to end of page */
|
|
|
|
if (is_kallsyms && (!strchr(prev->name, '[') !=
|
|
|
|
!strchr(curr->name, '[')))
|
|
|
|
prev->end = roundup(prev->end + 4096, 4096);
|
|
|
|
else
|
|
|
|
prev->end = curr->start;
|
|
|
|
|
|
|
|
pr_debug4("%s sym:%s end:%#" PRIx64 "\n",
|
|
|
|
__func__, prev->name, prev->end);
|
|
|
|
}
|
2009-10-06 01:26:17 +08:00
|
|
|
}
|
2009-10-08 00:48:56 +08:00
|
|
|
|
|
|
|
/* Last entry */
|
|
|
|
if (curr->end == curr->start)
|
perf symbols: Fix symbols__fixup_end heuristic for corner cases
The current symbols__fixup_end() heuristic for the last entry in the rb
tree is suboptimal as it leads to not being able to recognize the symbol
in the call graph in a couple of corner cases, for example:
i) If the symbol has a start address (f.e. exposed via kallsyms)
that is at a page boundary, then the roundup(curr->start, 4096)
for the last entry will result in curr->start == curr->end with
a symbol length of zero.
ii) If the symbol has a start address that is shortly before a page
boundary, then also here, curr->end - curr->start will just be
very few bytes, an unrealistically small range to perform a
match against.
Instead, change the heuristic to roundup(curr->start, 4096) + 4096, so
that we can catch such corner cases and have a better chance to find
that specific symbol. It's still just best effort as the real end of the
symbol is unknown to us (and could even be at a larger offset than the
current range), but better than the current situation.
Alexei reported that he recently ran into case i) with a JITed eBPF
program (these are all page aligned) as the last symbol, which wasn't
properly shown in the call graph (while other eBPF program symbols in
the rb tree were displayed correctly). Since this is a generic issue,
let's try to improve the heuristic a bit.
Reported-and-Tested-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Fixes: 2e538c4a1847 ("perf tools: Improve kernel/modules symbol lookup")
Link: http://lkml.kernel.org/r/bb5c80d27743be6f12afc68405f1956a330e1bc9.1489614365.git.daniel@iogearbox.net
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-03-16 05:53:37 +08:00
|
|
|
curr->end = roundup(curr->start, 4096) + 4096;
|
2009-10-06 01:26:17 +08:00
|
|
|
}
|
|
|
|
|
2019-11-26 09:21:28 +08:00
|
|
|
void maps__fixup_end(struct maps *maps)
|
2009-10-06 01:26:17 +08:00
|
|
|
{
|
2019-10-28 22:31:38 +08:00
|
|
|
struct map *prev = NULL, *curr;
|
2009-10-06 01:26:17 +08:00
|
|
|
|
2017-04-05 00:15:04 +08:00
|
|
|
down_write(&maps->lock);
|
2015-05-23 00:45:24 +08:00
|
|
|
|
2019-10-28 22:31:38 +08:00
|
|
|
maps__for_each_entry(maps, curr) {
|
|
|
|
if (prev != NULL && !prev->end)
|
|
|
|
prev->end = curr->start;
|
2009-10-06 01:26:17 +08:00
|
|
|
|
2019-10-28 22:31:38 +08:00
|
|
|
prev = curr;
|
2009-10-08 00:48:56 +08:00
|
|
|
}
|
2009-11-22 00:31:24 +08:00
|
|
|
|
|
|
|
/*
|
|
|
|
* We still don't have the actual symbols, so guess the
|
|
|
|
* last map's final address.
|
|
|
|
*/
|
2019-10-28 22:31:38 +08:00
|
|
|
if (curr && !curr->end)
|
2017-08-03 21:49:02 +08:00
|
|
|
curr->end = ~0ULL;
|
2015-05-23 00:45:24 +08:00
|
|
|
|
2017-04-05 00:15:04 +08:00
|
|
|
up_write(&maps->lock);
|
2009-10-06 01:26:17 +08:00
|
|
|
}
|
|
|
|
|
2018-04-26 22:09:10 +08:00
|
|
|
struct symbol *symbol__new(u64 start, u64 len, u8 binding, u8 type, const char *name)
|
2009-05-29 01:55:04 +08:00
|
|
|
{
|
2009-05-29 01:55:13 +08:00
|
|
|
size_t namelen = strlen(name) + 1;
|
2011-03-31 21:56:28 +08:00
|
|
|
struct symbol *sym = calloc(1, (symbol_conf.priv_size +
|
|
|
|
sizeof(*sym) + namelen));
|
|
|
|
if (sym == NULL)
|
perf_counter tools: Add 'perf annotate' feature
Add new perf sub-command to display annotated source code:
$ perf annotate decode_tree_entry
------------------------------------------------
Percent | Source code & Disassembly of /home/mingo/git/git
------------------------------------------------
:
: /home/mingo/git/git: file format elf64-x86-64
:
:
: Disassembly of section .text:
:
: 00000000004a0da0 <decode_tree_entry>:
: *modep = mode;
: return str;
: }
:
: static void decode_tree_entry(struct tree_desc *desc, const char *buf, unsigned long size)
: {
3.82 : 4a0da0: 41 54 push %r12
: const char *path;
: unsigned int mode, len;
:
: if (size < 24 || buf[size - 21])
0.17 : 4a0da2: 48 83 fa 17 cmp $0x17,%rdx
: *modep = mode;
: return str;
: }
:
: static void decode_tree_entry(struct tree_desc *desc, const char *buf, unsigned long size)
: {
0.00 : 4a0da6: 49 89 fc mov %rdi,%r12
0.00 : 4a0da9: 55 push %rbp
3.37 : 4a0daa: 53 push %rbx
: const char *path;
: unsigned int mode, len;
:
: if (size < 24 || buf[size - 21])
0.08 : 4a0dab: 76 73 jbe 4a0e20 <decode_tree_entry+0x80>
0.00 : 4a0dad: 80 7c 16 eb 00 cmpb $0x0,-0x15(%rsi,%rdx,1)
3.48 : 4a0db2: 75 6c jne 4a0e20 <decode_tree_entry+0x80>
: static const char *get_mode(const char *str, unsigned int *modep)
: {
: unsigned char c;
: unsigned int mode = 0;
:
: if (*str == ' ')
1.94 : 4a0db4: 0f b6 06 movzbl (%rsi),%eax
0.39 : 4a0db7: 3c 20 cmp $0x20,%al
0.00 : 4a0db9: 74 65 je 4a0e20 <decode_tree_entry+0x80>
: return NULL;
:
: while ((c = *str++) != ' ') {
0.06 : 4a0dbb: 89 c2 mov %eax,%edx
: if (c < '0' || c > '7')
1.99 : 4a0dbd: 31 ed xor %ebp,%ebp
: unsigned int mode = 0;
:
: if (*str == ' ')
: return NULL;
:
: while ((c = *str++) != ' ') {
1.74 : 4a0dbf: 48 8d 5e 01 lea 0x1(%rsi),%rbx
: if (c < '0' || c > '7')
0.00 : 4a0dc3: 8d 42 d0 lea -0x30(%rdx),%eax
0.17 : 4a0dc6: 3c 07 cmp $0x7,%al
0.00 : 4a0dc8: 76 0d jbe 4a0dd7 <decode_tree_entry+0x37>
0.00 : 4a0dca: eb 54 jmp 4a0e20 <decode_tree_entry+0x80>
0.00 : 4a0dcc: 0f 1f 40 00 nopl 0x0(%rax)
16.57 : 4a0dd0: 8d 42 d0 lea -0x30(%rdx),%eax
0.14 : 4a0dd3: 3c 07 cmp $0x7,%al
0.00 : 4a0dd5: 77 49 ja 4a0e20 <decode_tree_entry+0x80>
: return NULL;
: mode = (mode << 3) + (c - '0');
3.12 : 4a0dd7: 0f b6 c2 movzbl %dl,%eax
: unsigned int mode = 0;
:
: if (*str == ' ')
: return NULL;
:
: while ((c = *str++) != ' ') {
0.00 : 4a0dda: 0f b6 13 movzbl (%rbx),%edx
16.74 : 4a0ddd: 48 83 c3 01 add $0x1,%rbx
: if (c < '0' || c > '7')
: return NULL;
: mode = (mode << 3) + (c - '0');
The first column is the percentage of samples that arrived on that
particular line - relative to the total cost of the function.
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-06-06 21:48:52 +08:00
|
|
|
return NULL;
|
|
|
|
|
2016-08-26 03:09:21 +08:00
|
|
|
if (symbol_conf.priv_size) {
|
|
|
|
if (symbol_conf.init_annotation) {
|
|
|
|
struct annotation *notes = (void *)sym;
|
2021-11-12 11:51:24 +08:00
|
|
|
annotation__init(notes);
|
2016-08-26 03:09:21 +08:00
|
|
|
}
|
2011-03-31 21:56:28 +08:00
|
|
|
sym = ((void *)sym) + symbol_conf.priv_size;
|
2016-08-26 03:09:21 +08:00
|
|
|
}
|
2009-10-21 00:25:40 +08:00
|
|
|
|
2011-03-31 21:56:28 +08:00
|
|
|
sym->start = start;
|
2014-10-15 04:19:44 +08:00
|
|
|
sym->end = len ? start + len : start;
|
2018-04-26 22:09:10 +08:00
|
|
|
sym->type = type;
|
2011-03-31 21:56:28 +08:00
|
|
|
sym->binding = binding;
|
|
|
|
sym->namelen = namelen - 1;
|
2009-10-21 00:25:40 +08:00
|
|
|
|
2011-03-31 21:56:28 +08:00
|
|
|
pr_debug4("%s: %s %#" PRIx64 "-%#" PRIx64 "\n",
|
|
|
|
__func__, name, start, sym->end);
|
|
|
|
memcpy(sym->name, name, namelen);
|
2009-05-29 01:55:04 +08:00
|
|
|
|
2011-03-31 21:56:28 +08:00
|
|
|
return sym;
|
2009-05-29 01:55:04 +08:00
|
|
|
}
|
|
|
|
|
2011-03-31 21:56:28 +08:00
|
|
|
void symbol__delete(struct symbol *sym)
|
2009-05-29 01:55:04 +08:00
|
|
|
{
|
2021-11-12 11:51:24 +08:00
|
|
|
if (symbol_conf.priv_size) {
|
|
|
|
if (symbol_conf.init_annotation) {
|
|
|
|
struct annotation *notes = symbol__annotation(sym);
|
|
|
|
|
|
|
|
annotation__exit(notes);
|
|
|
|
}
|
|
|
|
}
|
2011-03-31 21:56:28 +08:00
|
|
|
free(((void *)sym) - symbol_conf.priv_size);
|
2009-05-29 01:55:04 +08:00
|
|
|
}
|
|
|
|
|
2018-12-07 03:18:17 +08:00
|
|
|
void symbols__delete(struct rb_root_cached *symbols)
|
2009-05-29 01:55:04 +08:00
|
|
|
{
|
|
|
|
struct symbol *pos;
|
2018-12-07 03:18:17 +08:00
|
|
|
struct rb_node *next = rb_first_cached(symbols);
|
2009-05-29 01:55:04 +08:00
|
|
|
|
|
|
|
while (next) {
|
|
|
|
		pos = rb_entry(next, struct symbol, rb_node);
		next = rb_next(&pos->rb_node);
		rb_erase_cached(&pos->rb_node, symbols);
		symbol__delete(pos);
	}
}

void __symbols__insert(struct rb_root_cached *symbols,
		       struct symbol *sym, bool kernel)
{
	struct rb_node **p = &symbols->rb_root.rb_node;
	struct rb_node *parent = NULL;
	const u64 ip = sym->start;
	struct symbol *s;
	bool leftmost = true;

	if (kernel) {
		const char *name = sym->name;
		/*
		 * ppc64 uses function descriptors and appends a '.' to the
		 * start of every instruction address. Remove it.
		 */
		if (name[0] == '.')
			name++;
		sym->idle = symbol__is_idle(name);
	}

	while (*p != NULL) {
		parent = *p;
		s = rb_entry(parent, struct symbol, rb_node);
		if (ip < s->start)
			p = &(*p)->rb_left;
		else {
			p = &(*p)->rb_right;
			leftmost = false;
		}
	}
	rb_link_node(&sym->rb_node, parent, p);
	rb_insert_color_cached(&sym->rb_node, symbols, leftmost);
}

void symbols__insert(struct rb_root_cached *symbols, struct symbol *sym)
{
	__symbols__insert(symbols, sym, false);
}

static struct symbol *symbols__find(struct rb_root_cached *symbols, u64 ip)
{
	struct rb_node *n;

	if (symbols == NULL)
		return NULL;

	n = symbols->rb_root.rb_node;

	while (n) {
		struct symbol *s = rb_entry(n, struct symbol, rb_node);

		if (ip < s->start)
			n = n->rb_left;
		else if (ip > s->end || (ip == s->end && ip != s->start))
			n = n->rb_right;
		else
			return s;
	}

	return NULL;
}

static struct symbol *symbols__first(struct rb_root_cached *symbols)
{
	struct rb_node *n = rb_first_cached(symbols);

	if (n)
		return rb_entry(n, struct symbol, rb_node);

	return NULL;
}

static struct symbol *symbols__last(struct rb_root_cached *symbols)
{
	struct rb_node *n = rb_last(&symbols->rb_root);

	if (n)
		return rb_entry(n, struct symbol, rb_node);

	return NULL;
}

static struct symbol *symbols__next(struct symbol *sym)
{
	struct rb_node *n = rb_next(&sym->rb_node);

	if (n)
		return rb_entry(n, struct symbol, rb_node);

	return NULL;
}

static void symbols__insert_by_name(struct rb_root_cached *symbols, struct symbol *sym)
{
	struct rb_node **p = &symbols->rb_root.rb_node;
	struct rb_node *parent = NULL;
	struct symbol_name_rb_node *symn, *s;
	bool leftmost = true;

	symn = container_of(sym, struct symbol_name_rb_node, sym);

	while (*p != NULL) {
		parent = *p;
		s = rb_entry(parent, struct symbol_name_rb_node, rb_node);
		if (strcmp(sym->name, s->sym.name) < 0)
			p = &(*p)->rb_left;
		else {
			p = &(*p)->rb_right;
			leftmost = false;
		}
	}
	rb_link_node(&symn->rb_node, parent, p);
	rb_insert_color_cached(&symn->rb_node, symbols, leftmost);
}

static void symbols__sort_by_name(struct rb_root_cached *symbols,
				  struct rb_root_cached *source)
{
	struct rb_node *nd;

	for (nd = rb_first_cached(source); nd; nd = rb_next(nd)) {
		struct symbol *pos = rb_entry(nd, struct symbol, rb_node);

		symbols__insert_by_name(symbols, pos);
	}
}

int symbol__match_symbol_name(const char *name, const char *str,
			      enum symbol_tag_include includes)
{
	const char *versioning;

	if (includes == SYMBOL_TAG_INCLUDE__DEFAULT_ONLY &&
	    (versioning = strstr(name, "@@"))) {
		int len = strlen(str);

		if (len < versioning - name)
			len = versioning - name;

		return arch__compare_symbol_names_n(name, str, len);
	} else
		return arch__compare_symbol_names(name, str);
}
|
|
|
|
|
2018-12-07 03:18:17 +08:00
|
|
|
static struct symbol *symbols__find_by_name(struct rb_root_cached *symbols,
|
perf symbols: Allow user probes on versioned symbols
Symbol versioning, as in glibc, results in symbols being defined as:
<real symbol>@[@]<version>
(Note that "@@" identifies a default symbol, if the symbol name is
repeated.)
perf is currently unable to deal with this, and is unable to create user
probes at such symbols:
--
$ nm /lib/powerpc64le-linux-gnu/libpthread.so.0 | grep pthread_create
0000000000008d30 t __pthread_create_2_1
0000000000008d30 T pthread_create@@GLIBC_2.17
$ /usr/bin/sudo perf probe -v -x /lib/powerpc64le-linux-gnu/libpthread.so.0 pthread_create
probe-definition(0): pthread_create
symbol:pthread_create file:(null) line:0 offset:0 return:0 lazy:(null)
0 arguments
Open Debuginfo file: /usr/lib/debug/lib/powerpc64le-linux-gnu/libpthread-2.19.so
Try to find probe point from debuginfo.
Probe point 'pthread_create' not found.
Error: Failed to add events. Reason: No such file or directory (Code: -2)
--
One is not able to specify the fully versioned symbol, either, due to
syntactic conflicts with other uses of "@" by perf:
--
$ /usr/bin/sudo perf probe -v -x /lib/powerpc64le-linux-gnu/libpthread.so.0 pthread_create@@GLIBC_2.17
probe-definition(0): pthread_create@@GLIBC_2.17
Semantic error :SRC@SRC is not allowed.
0 arguments
Error: Command Parse Error. Reason: Invalid argument (Code: -22)
--
This patch ignores versioning for default symbols, thus allowing probes to be
created for these symbols:
--
$ /usr/bin/sudo ./perf probe -x /lib/powerpc64le-linux-gnu/libpthread.so.0 pthread_create
Added new event:
probe_libpthread:pthread_create (on pthread_create in /lib/powerpc64le-linux-gnu/libpthread-2.19.so)
You can now use it in all perf tools, such as:
perf record -e probe_libpthread:pthread_create -aR sleep 1
$ /usr/bin/sudo ./perf record -e probe_libpthread:pthread_create -aR ./test 2
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.052 MB perf.data (2 samples) ]
$ /usr/bin/sudo ./perf script
test 2915 [000] 19124.260729: probe_libpthread:pthread_create: (3fff99248d38)
test 2916 [000] 19124.260962: probe_libpthread:pthread_create: (3fff99248d38)
$ /usr/bin/sudo ./perf probe --del=probe_libpthread:pthread_create
Removed event: probe_libpthread:pthread_create
--
Committer note:
Change the variable storing the result of strlen() to 'int', to fix the build
on debian:experimental-x-mipsel, fedora:24-x-ARC-uClibc, ubuntu:16.04-x-arm,
etc:
util/symbol.c: In function 'symbol__match_symbol_name':
util/symbol.c:422:11: error: comparison between signed and unsigned integer expressions [-Werror=sign-compare]
if (len < versioning - name)
^
Signed-off-by: Paul A. Clarke <pc@us.ibm.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Link: http://lkml.kernel.org/r/c2b18d9c-17f8-9285-4868-f58b6359ccac@us.ibm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-04-26 02:15:49 +08:00
|
|
|
const char *name,
|
|
|
|
enum symbol_tag_include includes)
|
perf symbols: Allow lookups by symbol name too
Configurable via symbol_conf.sort_by_name, so that the cost of an
extra rb_node on all 'struct symbol' instances is not paid by tools
that only want to decode addresses.
How to use it:
symbol_conf.sort_by_name = true;
symbol_init(&symbol_conf);
struct map *map = map_groups__find_by_name(kmaps, MAP__VARIABLE, "[kernel.kallsyms]");
if (map == NULL) {
pr_err("couldn't find map!\n");
kernel_maps__fprintf(stdout);
} else {
struct symbol *sym = map__find_symbol_by_name(map, sym_filter, NULL);
if (sym == NULL)
pr_err("couldn't find symbol %s!\n", sym_filter);
else
pr_info("symbol %s: %#Lx-%#Lx \n", sym_filter, sym->start, sym->end);
}
Looking over the vmlinux/kallsyms is common enough that I'll add a
variable to the upcoming struct perf_session to avoid the need to
use map_groups__find_by_name to get the main vmlinux/kallsyms map.
The above example looks on the 'variable' symtab, but it is just
like that for the functions one.
Also the sort operation is done when we first use
map__find_symbol_by_name, in a lazy way.
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Masami Hiramatsu <mhiramat@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <1260564622-12392-1-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-12-12 04:50:22 +08:00
|
|
|
{
|
|
|
|
struct rb_node *n;
|
2015-05-26 22:41:37 +08:00
|
|
|
struct symbol_name_rb_node *s = NULL;
|
perf symbols: Allow lookups by symbol name too
Configurable via symbol_conf.sort_by_name, so that the cost of an
extra rb_node on all 'struct symbol' instances is not paid by tools
that only want to decode addresses.
How to use it:
symbol_conf.sort_by_name = true;
symbol_init(&symbol_conf);
struct map *map = map_groups__find_by_name(kmaps, MAP__VARIABLE, "[kernel.kallsyms]");
if (map == NULL) {
pr_err("couldn't find map!\n");
kernel_maps__fprintf(stdout);
} else {
struct symbol *sym = map__find_symbol_by_name(map, sym_filter, NULL);
if (sym == NULL)
pr_err("couldn't find symbol %s!\n", sym_filter);
else
pr_info("symbol %s: %#Lx-%#Lx \n", sym_filter, sym->start, sym->end);
}
Looking over the vmlinux/kallsyms is common enough that I'll add a
variable to the upcoming struct perf_session to avoid the need to
use map_groups__find_by_name to get the main vmlinux/kallsyms map.
The above example looks on the 'variable' symtab, but it is just
like that for the functions one.
Also the sort operation is done when we first use
map__find_symbol_by_name, in a lazy way.
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Masami Hiramatsu <mhiramat@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <1260564622-12392-1-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-12-12 04:50:22 +08:00
|
|
|
|
2011-03-31 21:56:28 +08:00
|
|
|
if (symbols == NULL)
|
perf symbols: Allow lookups by symbol name too
Configurable via symbol_conf.sort_by_name, so that the cost of an
extra rb_node on all 'struct symbol' instances is not paid by tools
that only want to decode addresses.
How to use it:
symbol_conf.sort_by_name = true;
symbol_init(&symbol_conf);
struct map *map = map_groups__find_by_name(kmaps, MAP__VARIABLE, "[kernel.kallsyms]");
if (map == NULL) {
pr_err("couldn't find map!\n");
kernel_maps__fprintf(stdout);
} else {
struct symbol *sym = map__find_symbol_by_name(map, sym_filter, NULL);
if (sym == NULL)
pr_err("couldn't find symbol %s!\n", sym_filter);
else
pr_info("symbol %s: %#Lx-%#Lx \n", sym_filter, sym->start, sym->end);
}
Looking over the vmlinux/kallsyms is common enough that I'll add a
variable to the upcoming struct perf_session to avoid the need to
use map_groups__find_by_name to get the main vmlinux/kallsyms map.
The above example looks on the 'variable' symtab, but it is just
like that for the functions one.
Also the sort operation is done when we first use
map__find_symbol_by_name, in a lazy way.
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Masami Hiramatsu <mhiramat@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <1260564622-12392-1-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-12-12 04:50:22 +08:00
|
|
|
return NULL;
|
|
|
|
|
2018-12-07 03:18:17 +08:00
|
|
|
n = symbols->rb_root.rb_node;
|
perf symbols: Allow lookups by symbol name too
Configurable via symbol_conf.sort_by_name, so that the cost of an
extra rb_node on all 'struct symbol' instances is not paid by tools
that only want to decode addresses.
How to use it:
symbol_conf.sort_by_name = true;
symbol_init(&symbol_conf);
struct map *map = map_groups__find_by_name(kmaps, MAP__VARIABLE, "[kernel.kallsyms]");
if (map == NULL) {
pr_err("couldn't find map!\n");
kernel_maps__fprintf(stdout);
} else {
struct symbol *sym = map__find_symbol_by_name(map, sym_filter, NULL);
if (sym == NULL)
pr_err("couldn't find symbol %s!\n", sym_filter);
else
pr_info("symbol %s: %#Lx-%#Lx \n", sym_filter, sym->start, sym->end);
}
Looking over the vmlinux/kallsyms is common enough that I'll add a
variable to the upcoming struct perf_session to avoid the need to
use map_groups__find_by_name to get the main vmlinux/kallsyms map.
The above example looks on the 'variable' symtab, but it is just
like that for the functions one.
Also the sort operation is done when we first use
map__find_symbol_by_name, in a lazy way.
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Masami Hiramatsu <mhiramat@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <1260564622-12392-1-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-12-12 04:50:22 +08:00
|
|
|
|
|
|
|
while (n) {
|
|
|
|
int cmp;
|
|
|
|
|
|
|
|
s = rb_entry(n, struct symbol_name_rb_node, rb_node);
|
perf symbols: Allow user probes on versioned symbols
Symbol versioning, as in glibc, results in symbols being defined as:
<real symbol>@[@]<version>
(Note that "@@" identifies a default symbol, if the symbol name is
repeated.)
perf is currently unable to deal with this, and is unable to create user
probes at such symbols:
--
$ nm /lib/powerpc64le-linux-gnu/libpthread.so.0 | grep pthread_create
0000000000008d30 t __pthread_create_2_1
0000000000008d30 T pthread_create@@GLIBC_2.17
$ /usr/bin/sudo perf probe -v -x /lib/powerpc64le-linux-gnu/libpthread.so.0 pthread_create
probe-definition(0): pthread_create
symbol:pthread_create file:(null) line:0 offset:0 return:0 lazy:(null)
0 arguments
Open Debuginfo file: /usr/lib/debug/lib/powerpc64le-linux-gnu/libpthread-2.19.so
Try to find probe point from debuginfo.
Probe point 'pthread_create' not found.
Error: Failed to add events. Reason: No such file or directory (Code: -2)
--
One is not able to specify the fully versioned symbol, either, due to
syntactic conflicts with other uses of "@" by perf:
--
$ /usr/bin/sudo perf probe -v -x /lib/powerpc64le-linux-gnu/libpthread.so.0 pthread_create@@GLIBC_2.17
probe-definition(0): pthread_create@@GLIBC_2.17
Semantic error :SRC@SRC is not allowed.
0 arguments
Error: Command Parse Error. Reason: Invalid argument (Code: -22)
--
This patch ignores versioning for default symbols, thus allowing probes to be
created for these symbols:
--
$ /usr/bin/sudo ./perf probe -x /lib/powerpc64le-linux-gnu/libpthread.so.0 pthread_create
Added new event:
probe_libpthread:pthread_create (on pthread_create in /lib/powerpc64le-linux-gnu/libpthread-2.19.so)
You can now use it in all perf tools, such as:
perf record -e probe_libpthread:pthread_create -aR sleep 1
$ /usr/bin/sudo ./perf record -e probe_libpthread:pthread_create -aR ./test 2
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.052 MB perf.data (2 samples) ]
$ /usr/bin/sudo ./perf script
test 2915 [000] 19124.260729: probe_libpthread:pthread_create: (3fff99248d38)
test 2916 [000] 19124.260962: probe_libpthread:pthread_create: (3fff99248d38)
$ /usr/bin/sudo ./perf probe --del=probe_libpthread:pthread_create
Removed event: probe_libpthread:pthread_create
--
Committer note:
Change the variable storing the result of strlen() to 'int', to fix the build
on debian:experimental-x-mipsel, fedora:24-x-ARC-uClibc, ubuntu:16.04-x-arm,
etc:
util/symbol.c: In function 'symbol__match_symbol_name':
util/symbol.c:422:11: error: comparison between signed and unsigned integer expressions [-Werror=sign-compare]
if (len < versioning - name)
^
Signed-off-by: Paul A. Clarke <pc@us.ibm.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Link: http://lkml.kernel.org/r/c2b18d9c-17f8-9285-4868-f58b6359ccac@us.ibm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-04-26 02:15:49 +08:00
		cmp = symbol__match_symbol_name(s->sym.name, name, includes);

perf symbols: Allow lookups by symbol name too
Configurable via symbol_conf.sort_by_name, so that the cost of an
extra rb_node on all 'struct symbol' instances is not paid by tools
that only want to decode addresses.
How to use it:
symbol_conf.sort_by_name = true;
symbol_init(&symbol_conf);
struct map *map = map_groups__find_by_name(kmaps, MAP__VARIABLE, "[kernel.kallsyms]");
if (map == NULL) {
pr_err("couldn't find map!\n");
kernel_maps__fprintf(stdout);
} else {
struct symbol *sym = map__find_symbol_by_name(map, sym_filter, NULL);
if (sym == NULL)
pr_err("couldn't find symbol %s!\n", sym_filter);
else
pr_info("symbol %s: %#Lx-%#Lx \n", sym_filter, sym->start, sym->end);
}
Looking over the vmlinux/kallsyms is common enough that I'll add a
variable to the upcoming struct perf_session to avoid the need to
use map_groups__find_by_name to get the main vmlinux/kallsyms map.
The above example looks on the 'variable' symtab, but it is just
like that for the functions one.
Also the sort operation is done when we first use
map__find_symbol_by_name, in a lazy way.
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Masami Hiramatsu <mhiramat@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <1260564622-12392-1-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-12-12 04:50:22 +08:00
		if (cmp > 0)
			n = n->rb_left;
		else if (cmp < 0)
			n = n->rb_right;
		else
			break;
	}

	if (n == NULL)
		return NULL;

	if (includes != SYMBOL_TAG_INCLUDE__DEFAULT_ONLY)
		/* return first symbol that has same name (if any) */
		for (n = rb_prev(n); n; n = rb_prev(n)) {
			struct symbol_name_rb_node *tmp;

			tmp = rb_entry(n, struct symbol_name_rb_node, rb_node);

			if (arch__compare_symbol_names(tmp->sym.name, s->sym.name))
				break;

			s = tmp;
		}

	return &s->sym;
}

void dso__reset_find_symbol_cache(struct dso *dso)
{
	dso->last_find_result.addr = 0;
	dso->last_find_result.symbol = NULL;
}

void dso__insert_symbol(struct dso *dso, struct symbol *sym)
{
	__symbols__insert(&dso->symbols, sym, dso->kernel);

	/* update the symbol cache if necessary */
	if (dso->last_find_result.addr >= sym->start &&
	    (dso->last_find_result.addr < sym->end ||
	     sym->start == sym->end)) {
		dso->last_find_result.symbol = sym;
	}
}

void dso__delete_symbol(struct dso *dso, struct symbol *sym)
{
	rb_erase_cached(&sym->rb_node, &dso->symbols);
	symbol__delete(sym);
	dso__reset_find_symbol_cache(dso);
}

struct symbol *dso__find_symbol(struct dso *dso, u64 addr)
{
	if (dso->last_find_result.addr != addr || dso->last_find_result.symbol == NULL) {
		dso->last_find_result.addr = addr;
		dso->last_find_result.symbol = symbols__find(&dso->symbols, addr);
	}

	return dso->last_find_result.symbol;
}

struct symbol *dso__find_symbol_nocache(struct dso *dso, u64 addr)
{
	return symbols__find(&dso->symbols, addr);
}

struct symbol *dso__first_symbol(struct dso *dso)
{
	return symbols__first(&dso->symbols);
}

struct symbol *dso__last_symbol(struct dso *dso)
{
	return symbols__last(&dso->symbols);
}

struct symbol *dso__next_symbol(struct symbol *sym)
{
	return symbols__next(sym);
}

struct symbol *symbol__next_by_name(struct symbol *sym)
{
	struct symbol_name_rb_node *s = container_of(sym, struct symbol_name_rb_node, sym);
	struct rb_node *n = rb_next(&s->rb_node);

	return n ? &rb_entry(n, struct symbol_name_rb_node, rb_node)->sym : NULL;
}

/*
 * Return the first symbol that matches @name.
 */
struct symbol *dso__find_symbol_by_name(struct dso *dso, const char *name)
{
	struct symbol *s = symbols__find_by_name(&dso->symbol_names, name,
						 SYMBOL_TAG_INCLUDE__NONE);
	if (!s)
		s = symbols__find_by_name(&dso->symbol_names, name,
perf symbols: Allow user probes on versioned symbols
Symbol versioning, as in glibc, results in symbols being defined as:
<real symbol>@[@]<version>
(Note that "@@" identifies a default symbol, if the symbol name is
repeated.)
perf is currently unable to deal with this, and is unable to create user
probes at such symbols:
--
$ nm /lib/powerpc64le-linux-gnu/libpthread.so.0 | grep pthread_create
0000000000008d30 t __pthread_create_2_1
0000000000008d30 T pthread_create@@GLIBC_2.17
$ /usr/bin/sudo perf probe -v -x /lib/powerpc64le-linux-gnu/libpthread.so.0 pthread_create
probe-definition(0): pthread_create
symbol:pthread_create file:(null) line:0 offset:0 return:0 lazy:(null)
0 arguments
Open Debuginfo file: /usr/lib/debug/lib/powerpc64le-linux-gnu/libpthread-2.19.so
Try to find probe point from debuginfo.
Probe point 'pthread_create' not found.
Error: Failed to add events. Reason: No such file or directory (Code: -2)
--
One is not able to specify the fully versioned symbol, either, due to
syntactic conflicts with other uses of "@" by perf:
--
$ /usr/bin/sudo perf probe -v -x /lib/powerpc64le-linux-gnu/libpthread.so.0 pthread_create@@GLIBC_2.17
probe-definition(0): pthread_create@@GLIBC_2.17
Semantic error :SRC@SRC is not allowed.
0 arguments
Error: Command Parse Error. Reason: Invalid argument (Code: -22)
--
This patch ignores versioning for default symbols, thus allowing probes to be
created for these symbols:
--
$ /usr/bin/sudo ./perf probe -x /lib/powerpc64le-linux-gnu/libpthread.so.0 pthread_create
Added new event:
probe_libpthread:pthread_create (on pthread_create in /lib/powerpc64le-linux-gnu/libpthread-2.19.so)
You can now use it in all perf tools, such as:
perf record -e probe_libpthread:pthread_create -aR sleep 1
$ /usr/bin/sudo ./perf record -e probe_libpthread:pthread_create -aR ./test 2
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.052 MB perf.data (2 samples) ]
$ /usr/bin/sudo ./perf script
test 2915 [000] 19124.260729: probe_libpthread:pthread_create: (3fff99248d38)
test 2916 [000] 19124.260962: probe_libpthread:pthread_create: (3fff99248d38)
$ /usr/bin/sudo ./perf probe --del=probe_libpthread:pthread_create
Removed event: probe_libpthread:pthread_create
--
Committer note:
Change the variable storing the result of strlen() to 'int', to fix the build
on debian:experimental-x-mipsel, fedora:24-x-ARC-uClibc, ubuntu:16.04-x-arm,
etc:
util/symbol.c: In function 'symbol__match_symbol_name':
util/symbol.c:422:11: error: comparison between signed and unsigned integer expressions [-Werror=sign-compare]
if (len < versioning - name)
^
Signed-off-by: Paul A. Clarke <pc@us.ibm.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Link: http://lkml.kernel.org/r/c2b18d9c-17f8-9285-4868-f58b6359ccac@us.ibm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-04-26 02:15:49 +08:00
						  SYMBOL_TAG_INCLUDE__DEFAULT_ONLY);
	return s;
perf symbols: Allow lookups by symbol name too
Configurable via symbol_conf.sort_by_name, so that the cost of an
extra rb_node on all 'struct symbol' instances is not paid by tools
that only want to decode addresses.
How to use it:
symbol_conf.sort_by_name = true;
symbol_init(&symbol_conf);
struct map *map = map_groups__find_by_name(kmaps, MAP__VARIABLE, "[kernel.kallsyms]");
if (map == NULL) {
pr_err("couldn't find map!\n");
kernel_maps__fprintf(stdout);
} else {
struct symbol *sym = map__find_symbol_by_name(map, sym_filter, NULL);
if (sym == NULL)
pr_err("couldn't find symbol %s!\n", sym_filter);
else
pr_info("symbol %s: %#Lx-%#Lx \n", sym_filter, sym->start, sym->end);
}
Looking over the vmlinux/kallsyms is common enough that I'll add a
variable to the upcoming struct perf_session to avoid the need to
use map_groups__find_by_name to get the main vmlinux/kallsyms map.
The above example looks on the 'variable' symtab, but it is just
like that for the functions one.
Also the sort operation is done when we first use
map__find_symbol_by_name, in a lazy way.
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Masami Hiramatsu <mhiramat@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <1260564622-12392-1-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
}
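The "Allow user probes on versioned symbols" commit message above describes ignoring the "@@&lt;version&gt;" suffix when matching a plain name against a default-versioned symbol such as "pthread_create@@GLIBC_2.17". A minimal standalone sketch of that comparison (hypothetical helper, not perf's actual symbol__match_symbol_name()):

```c
#include <stdbool.h>
#include <string.h>

/*
 * Hypothetical sketch: compare a plain symbol name against a possibly
 * versioned one, treating "@@" as marking a default-versioned symbol
 * whose version suffix may be ignored when matching.
 */
static bool match_ignoring_default_version(const char *name,
					   const char *versioned)
{
	const char *at = strstr(versioned, "@@");
	size_t len = at ? (size_t)(at - versioned) : strlen(versioned);

	return strlen(name) == len && strncmp(name, versioned, len) == 0;
}
```

With this, "pthread_create" matches "pthread_create@@GLIBC_2.17" but not the distinct local alias "__pthread_create_2_1".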
void dso__sort_by_name(struct dso *dso)
{
	dso__set_sorted_by_name(dso);
	return symbols__sort_by_name(&dso->symbol_names, &dso->symbols);
}

/*
 * While we find nice hex chars, build a long_val.
 * Return number of chars processed.
 */
static int hex2u64(const char *ptr, u64 *long_val)
{
	char *p;

	*long_val = strtoull(ptr, &p, 16);

	return p - ptr;
}

int modules__parse(const char *filename, void *arg,
		   int (*process_module)(void *arg, const char *name,
					 u64 start, u64 size))
{
	char *line = NULL;
	size_t n;
	FILE *file;
	int err = 0;

	file = fopen(filename, "r");
	if (file == NULL)
		return -1;

	while (1) {
		char name[PATH_MAX];
		u64 start, size;
		char *sep, *endptr;
		ssize_t line_len;

		line_len = getline(&line, &n, file);
		if (line_len < 0) {
			if (feof(file))
				break;
			err = -1;
			goto out;
		}

		if (!line) {
			err = -1;
			goto out;
		}

		line[--line_len] = '\0'; /* \n */

		sep = strrchr(line, 'x');
		if (sep == NULL)
			continue;

		hex2u64(sep + 1, &start);

		sep = strchr(line, ' ');
		if (sep == NULL)
			continue;

		*sep = '\0';

		scnprintf(name, sizeof(name), "[%s]", line);

		size = strtoul(sep + 1, &endptr, 0);
		if (*endptr != ' ' && *endptr != '\t')
			continue;

		err = process_module(arg, name, start, size);
		if (err)
			break;
	}
out:
	free(line);
	fclose(file);
	return err;
}
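The per-line logic of modules__parse() above can be exercised on its own: /proc/modules lines look like "name size refcount deps state 0x&lt;address&gt;", so the code takes the hex after the last 'x', the name before the first space, and the size right after it. A self-contained sketch of that one-line parse (hypothetical helper, input modified in place, mirroring but not reusing perf's code):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef unsigned long long u64;	/* stand-in for perf's u64 typedef */

/* Parse one /proc/modules-style line into "[name]", start and size. */
static int parse_module_line(char *line, char *name, size_t name_sz,
			     u64 *start, u64 *size)
{
	char *sep, *endptr;

	sep = strrchr(line, 'x');	/* "0x<start>" is the last field */
	if (sep == NULL)
		return -1;
	*start = strtoull(sep + 1, NULL, 16);

	sep = strchr(line, ' ');	/* module name is the first field */
	if (sep == NULL)
		return -1;
	*sep = '\0';
	snprintf(name, name_sz, "[%s]", line);

	*size = strtoul(sep + 1, &endptr, 0);
	if (*endptr != ' ' && *endptr != '\t')
		return -1;
	return 0;
}
```

For the line "nf_nat 49152 2 xt_MASQUERADE, Live 0xffffffffc0a5b000" this yields name "[nf_nat]", size 49152 and start 0xffffffffc0a5b000.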

/*
 * These are symbols in the kernel image, so make sure that
 * sym is from a kernel DSO.
 */
static bool symbol__is_idle(const char *name)
{
	const char * const idle_symbols[] = {
		"acpi_idle_do_entry",
		"acpi_processor_ffh_cstate_enter",
		"arch_cpu_idle",
		"cpu_idle",
		"cpu_startup_entry",
		"idle_cpu",
		"intel_idle",
		"default_idle",
		"native_safe_halt",
		"enter_idle",
		"exit_idle",
		"mwait_idle",
		"mwait_idle_with_hints",
		"mwait_idle_with_hints.constprop.0",
		"poll_idle",
		"ppc64_runlatch_off",
		"pseries_dedicated_idle_sleep",
		"psw_idle",
		"psw_idle_exit",
		NULL
	};
	int i;
	static struct strlist *idle_symbols_list;

	if (idle_symbols_list)
		return strlist__has_entry(idle_symbols_list, name);

	idle_symbols_list = strlist__new(NULL, NULL);

	for (i = 0; idle_symbols[i]; i++)
		strlist__add(idle_symbols_list, idle_symbols[i]);

	return strlist__has_entry(idle_symbols_list, name);
}
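symbol__is_idle() builds its strlist lazily: the first call populates the lookup structure, later calls only query it. The same pattern can be sketched standalone with qsort/bsearch in place of perf's strlist (illustrative only; names and the shortened table are hypothetical):

```c
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

static int cmp_str(const void *a, const void *b)
{
	return strcmp(*(const char * const *)a, *(const char * const *)b);
}

/* Lazy lookup: sort the table on first call, then bsearch every time. */
static bool is_idle_symbol(const char *name)
{
	static const char *idle_symbols[] = {
		"poll_idle", "cpu_idle", "intel_idle", "arch_cpu_idle",
	};
	static bool sorted;

	if (!sorted) {
		qsort(idle_symbols, 4, sizeof(idle_symbols[0]), cmp_str);
		sorted = true;
	}
	return bsearch(&name, idle_symbols, 4, sizeof(idle_symbols[0]),
		       cmp_str) != NULL;
}
```

The one-time setup cost is paid only if the function is used at all, which is the same trade-off the strlist version makes.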

static int map__process_kallsym_symbol(void *arg, const char *name,
				       char type, u64 start)
{
	struct symbol *sym;
	struct dso *dso = arg;
	struct rb_root_cached *root = &dso->symbols;

	if (!symbol_type__filter(type))
		return 0;

	/* Ignore local symbols for ARM modules */
	if (name[0] == '$')
		return 0;

	/*
	 * module symbols are not sorted so we add all
	 * symbols, setting length to 0, and rely on
	 * symbols__fixup_end() to fix it up.
	 */
	sym = symbol__new(start, 0, kallsyms2elf_binding(type), kallsyms2elf_type(type), name);
	if (sym == NULL)
		return -ENOMEM;
	/*
	 * We will pass the symbols to the filter later, in
	 * map__split_kallsyms, when we have split the maps per module
	 */
	__symbols__insert(root, sym, !strchr(name, '['));

	return 0;
}

/*
 * Loads the function entries in /proc/kallsyms into kernel_map->dso,
 * so that we can in the next step set the symbol ->end address and then
 * call kernel_maps__split_kallsyms.
 */
static int dso__load_all_kallsyms(struct dso *dso, const char *filename)
{
	return kallsyms__parse(filename, dso, map__process_kallsym_symbol);
}
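dso__load_all_kallsyms() delegates the actual text parsing to kallsyms__parse(), which feeds the callback above one (start, type, name) triple per /proc/kallsyms line of the form "&lt;hex addr&gt; &lt;type&gt; &lt;name&gt;[\t[module]]". A hedged standalone sketch of that line format (hypothetical helper, not perf's kallsyms__parse()):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef unsigned long long u64;	/* stand-in for perf's u64 typedef */

/* Split "<hex addr> <type> <name>" into its three fields. */
static int parse_kallsyms_line(const char *line, u64 *start, char *type,
			       char name[256])
{
	char *p;

	*start = strtoull(line, &p, 16);
	if (p == line || *p != ' ')
		return -1;
	*type = *++p;			/* single-letter nm-style type */
	if (*type == '\0' || p[1] != ' ')
		return -1;
	/* %s stops at whitespace, so a "\t[module]" suffix is dropped */
	return sscanf(p + 2, "%255s", name) == 1 ? 0 : -1;
}
```

The real callback then cares about the '\t' module suffix separately, which is why map__process_kallsym_symbol() checks strchr(name, '[').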

static int maps__split_kallsyms_for_kcore(struct maps *kmaps, struct dso *dso)
{
	struct map *curr_map;
	struct symbol *pos;
	int count = 0;
	struct rb_root_cached old_root = dso->symbols;
	struct rb_root_cached *root = &dso->symbols;
	struct rb_node *next = rb_first_cached(root);

	if (!kmaps)
		return -1;

	*root = RB_ROOT_CACHED;

	while (next) {
		char *module;

		pos = rb_entry(next, struct symbol, rb_node);
		next = rb_next(&pos->rb_node);

		rb_erase_cached(&pos->rb_node, &old_root);
		RB_CLEAR_NODE(&pos->rb_node);
		module = strchr(pos->name, '\t');
		if (module)
			*module = '\0';

		curr_map = maps__find(kmaps, pos->start);

		if (!curr_map) {
			symbol__delete(pos);
			continue;
		}

		pos->start -= curr_map->start - curr_map->pgoff;
		if (pos->end > curr_map->end)
			pos->end = curr_map->end;
		if (pos->end)
			pos->end -= curr_map->start - curr_map->pgoff;
		symbols__insert(&curr_map->dso->symbols, pos);
		++count;
	}

	/* Symbols have been adjusted */
	dso->adjust_symbols = 1;

	return count;
}
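The rebasing step above, `pos->start -= curr_map->start - curr_map->pgoff`, converts an absolute kernel address into an address relative to the map's file offset. A minimal illustration of that arithmetic with a hypothetical two-field map (not perf's struct map):

```c
typedef unsigned long long u64;	/* stand-in for perf's u64 typedef */

/* Hypothetical minimal map, just enough to show the rebasing math. */
struct mini_map {
	u64 start;	/* where the map is loaded */
	u64 pgoff;	/* file offset it corresponds to */
};

/* Make addr relative to the map's backing file, as the kcore split does. */
static u64 rebase_addr(const struct mini_map *m, u64 addr)
{
	return addr - (m->start - m->pgoff);
}
```

For a map loaded at 0xffffffff81000000 with pgoff 0x1000, the symbol address 0xffffffff81002000 becomes 0x3000.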

/*
 * Split the symbols into maps, making sure there are no overlaps, i.e. the
 * kernel range is broken in several maps, named [kernel].N, as we don't have
 * the original ELF section names vmlinux have.
 */
static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
				struct map *initial_map)
{
	struct machine *machine;
	struct map *curr_map = initial_map;
	struct symbol *pos;
	int count = 0, moved = 0;
	struct rb_root_cached *root = &dso->symbols;
	struct rb_node *next = rb_first_cached(root);
	int kernel_range = 0;
	bool x86_64;

	if (!kmaps)
		return -1;

	machine = kmaps->machine;

	x86_64 = machine__is(machine, "x86_64");

	while (next) {
		char *module;

		pos = rb_entry(next, struct symbol, rb_node);
		next = rb_next(&pos->rb_node);

		module = strchr(pos->name, '\t');
		if (module) {
			if (!symbol_conf.use_modules)
				goto discard_symbol;

			*module++ = '\0';

perf tools: Encode kernel module mappings in perf.data
We were always looking at the running machine /proc/modules,
even when processing a perf.data file, which only makes sense
when we're doing 'perf record' and 'perf report' on the same
machine, and in close sucession, or if we don't use modules at
all, right Peter? ;-)
Now, at 'perf record' time we read /proc/modules, find the long
path for modules, and put them as PERF_MMAP events, just like we
did to encode the reloc reference symbol for vmlinux. Talking
about that now it is encoded in .pgoff, so that we can use
.{start,len} to store the address boundaries for the kernel so
that when we reconstruct the kmaps tree we can do lookups right
away, without having to fixup the end of the kernel maps like we
did in the past (and now only in perf record).
One more step in the 'perf archive' direction when we'll finally
be able to collect data in one machine and analyse in another.
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <1263396139-4798-1-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-01-13 23:22:17 +08:00
			if (strcmp(curr_map->dso->short_name, module)) {
				if (curr_map != initial_map &&
				    dso->kernel == DSO_SPACE__KERNEL_GUEST &&
				    machine__is_default_guest(machine)) {
					/*
					 * We assume all symbols of a module are
					 * continuous in kallsyms, so curr_map
					 * points to a module and all its
					 * symbols are in its kmap. Mark it as
					 * loaded.
					 */
					dso__set_loaded(curr_map->dso);
				}

				curr_map = maps__find_by_name(kmaps, module);
				if (curr_map == NULL) {
					pr_debug("%s/proc/{kallsyms,modules} "
						 "inconsistency while looking "
						 "for \"%s\" module!\n",
						 machine->root_dir, module);
					curr_map = initial_map;
					goto discard_symbol;
				}

				if (curr_map->dso->loaded &&
				    !machine__is_default_guest(machine))
					goto discard_symbol;
			}
			/*
			 * So that we look just like we get from .ko files,
			 * i.e. not prelinked, relative to initial_map->start.
			 */
			pos->start = curr_map->map_ip(curr_map, pos->start);
			pos->end   = curr_map->map_ip(curr_map, pos->end);
		} else if (x86_64 && is_entry_trampoline(pos->name)) {
			/*
			 * These symbols are not needed anymore since the
			 * trampoline maps refer to the text section and it's
			 * symbols instead. Avoid having to deal with
			 * relocations, and the assumption that the first symbol
			 * is the start of kernel text, by simply removing the
			 * symbols at this point.
			 */
			goto discard_symbol;
		} else if (curr_map != initial_map) {
			char dso_name[PATH_MAX];
			struct dso *ndso;

			if (delta) {
				/* Kernel was relocated at boot time */
				pos->start -= delta;
				pos->end -= delta;
			}

			if (count == 0) {
				curr_map = initial_map;
				goto add_symbol;
			}

			if (dso->kernel == DSO_SPACE__KERNEL_GUEST)
				snprintf(dso_name, sizeof(dso_name),
					"[guest.kernel].%d",
					kernel_range++);
			else
				snprintf(dso_name, sizeof(dso_name),
					"[kernel].%d",
					kernel_range++);

			ndso = dso__new(dso_name);
			if (ndso == NULL)
				return -1;

			ndso->kernel = dso->kernel;

			curr_map = map__new2(pos->start, ndso);
			if (curr_map == NULL) {
				dso__put(ndso);
				return -1;
			}

			curr_map->map_ip = curr_map->unmap_ip = identity__map_ip;
			maps__insert(kmaps, curr_map);
			++kernel_range;
		} else if (delta) {
			/* Kernel was relocated at boot time */
			pos->start -= delta;
			pos->end -= delta;
		}
add_symbol:
		if (curr_map != initial_map) {
			rb_erase_cached(&pos->rb_node, root);
			symbols__insert(&curr_map->dso->symbols, pos);
			++moved;
		} else
			++count;

		continue;
discard_symbol:
		rb_erase_cached(&pos->rb_node, root);
		symbol__delete(pos);
	}

	if (curr_map != initial_map &&
	    dso->kernel == DSO_SPACE__KERNEL_GUEST &&
	    machine__is_default_guest(kmaps->machine)) {
		dso__set_loaded(curr_map->dso);
	}

	return count + moved;
}

bool symbol__restricted_filename(const char *filename,
				 const char *restricted_filename)
perf symbols: Handle /proc/sys/kernel/kptr_restrict
Perf uses /proc/modules to figure out where kernel modules are loaded.
With the advent of kptr_restrict, non root users get zeroes for all module
start addresses.
So check if kptr_restrict is non zero and don't generate the syntethic
PERF_RECORD_MMAP events for them.
Warn the user about it in perf record and in perf report.
In perf report the reference relocation symbol being zero means that
kptr_restrict was set, thus /proc/kallsyms has only zeroed addresses, so don't
use it to fixup symbol addresses when using a valid kallsyms (in the buildid
cache) or vmlinux (in the vmlinux path) build-id located automatically or
specified by the user.
Provide an explanation about it in 'perf report' if kernel samples were taken,
checking if a suitable vmlinux or kallsyms was found/specified.
Restricted /proc/kallsyms don't go to the buildid cache anymore.
Example:
[acme@emilia ~]$ perf record -F 100000 sleep 1
WARNING: Kernel address maps (/proc/{kallsyms,modules}) are restricted, check
/proc/sys/kernel/kptr_restrict.
Samples in kernel functions may not be resolved if a suitable vmlinux file is
not found in the buildid cache or in the vmlinux path.
Samples in kernel modules won't be resolved at all.
If some relocation was applied (e.g. kexec) symbols may be misresolved even
with a suitable vmlinux or kallsyms file.
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.005 MB perf.data (~231 samples) ]
[acme@emilia ~]$
[acme@emilia ~]$ perf report --stdio
Kernel address maps (/proc/{kallsyms,modules}) were restricted,
check /proc/sys/kernel/kptr_restrict before running 'perf record'.
If some relocation was applied (e.g. kexec) symbols may be misresolved.
Samples in kernel modules can't be resolved as well.
# Events: 13 cycles
#
# Overhead Command Shared Object Symbol
# ........ ....... ................. .....................
#
20.24% sleep [kernel.kallsyms] [k] page_fault
20.04% sleep [kernel.kallsyms] [k] filemap_fault
19.78% sleep [kernel.kallsyms] [k] __lru_cache_add
19.69% sleep ld-2.12.so [.] memcpy
14.71% sleep [kernel.kallsyms] [k] dput
4.70% sleep [kernel.kallsyms] [k] flush_signal_handlers
0.73% sleep [kernel.kallsyms] [k] perf_event_comm
0.11% sleep [kernel.kallsyms] [k] native_write_msr_safe
#
# (For a higher level overview, try: perf report --sort comm,dso)
#
[acme@emilia ~]$
This is because it found a suitable vmlinux (build-id checked) in
/lib/modules/2.6.39-rc7+/build/vmlinux (use -v in perf report to see the long
file name).
If we remove that file from the vmlinux path:
[root@emilia ~]# mv /lib/modules/2.6.39-rc7+/build/vmlinux \
/lib/modules/2.6.39-rc7+/build/vmlinux.OFF
[acme@emilia ~]$ perf report --stdio
[kernel.kallsyms] with build id 57298cdbe0131f6871667ec0eaab4804dcf6f562
not found, continuing without symbols
Kernel address maps (/proc/{kallsyms,modules}) were restricted, check
/proc/sys/kernel/kptr_restrict before running 'perf record'.
As no suitable kallsyms nor vmlinux was found, kernel samples can't be
resolved.
Samples in kernel modules can't be resolved as well.
# Events: 13 cycles
#
# Overhead Command Shared Object Symbol
# ........ ....... ................. ......
#
80.31% sleep [kernel.kallsyms] [k] 0xffffffff8103425a
19.69% sleep ld-2.12.so [.] memcpy
#
# (For a higher level overview, try: perf report --sort comm,dso)
#
[acme@emilia ~]$
Reported-by: Stephane Eranian <eranian@google.com>
Suggested-by: David Miller <davem@davemloft.net>
Cc: Dave Jones <davej@redhat.com>
Cc: David Miller <davem@davemloft.net>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Kees Cook <kees.cook@canonical.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Tom Zanussi <tzanussi@gmail.com>
Link: http://lkml.kernel.org/n/tip-mt512joaxxbhhp1odop04yit@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2011-05-26 20:53:51 +08:00
{
	bool restricted = false;

	if (symbol_conf.kptr_restrict) {
		char *r = realpath(filename, NULL);

		if (r != NULL) {
			restricted = strcmp(r, restricted_filename) == 0;
			free(r);
			return restricted;
		}
	}

	return restricted;
}
|
|
|
|
|
2013-10-09 20:01:11 +08:00
|
|
|
struct module_info {
|
|
|
|
struct rb_node rb_node;
|
|
|
|
char *name;
|
|
|
|
u64 start;
|
2013-08-07 19:38:51 +08:00
|
|
|
};
|
|
|
|
|
2013-10-09 20:01:11 +08:00
|
|
|
static void add_module(struct module_info *mi, struct rb_root *modules)
|
2013-08-07 19:38:51 +08:00
|
|
|
{
|
2013-10-09 20:01:11 +08:00
|
|
|
struct rb_node **p = &modules->rb_node;
|
|
|
|
struct rb_node *parent = NULL;
|
|
|
|
struct module_info *m;
|
2013-08-07 19:38:51 +08:00
|
|
|
|
2013-10-09 20:01:11 +08:00
|
|
|
while (*p != NULL) {
|
|
|
|
parent = *p;
|
|
|
|
m = rb_entry(parent, struct module_info, rb_node);
|
|
|
|
if (strcmp(mi->name, m->name) < 0)
|
|
|
|
p = &(*p)->rb_left;
|
|
|
|
else
|
|
|
|
p = &(*p)->rb_right;
|
|
|
|
}
|
|
|
|
rb_link_node(&mi->rb_node, parent, p);
|
|
|
|
rb_insert_color(&mi->rb_node, modules);
|
|
|
|
}
|
|
|
|
|
|
|
|
static void delete_modules(struct rb_root *modules)
|
|
|
|
{
|
|
|
|
struct module_info *mi;
|
|
|
|
struct rb_node *next = rb_first(modules);
|
|
|
|
|
|
|
|
while (next) {
|
|
|
|
mi = rb_entry(next, struct module_info, rb_node);
|
|
|
|
next = rb_next(&mi->rb_node);
|
|
|
|
rb_erase(&mi->rb_node, modules);
|
2013-12-28 03:55:14 +08:00
|
|
|
zfree(&mi->name);
|
2013-10-09 20:01:11 +08:00
|
|
|
free(mi);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
static struct module_info *find_module(const char *name,
|
|
|
|
struct rb_root *modules)
|
|
|
|
{
|
|
|
|
struct rb_node *n = modules->rb_node;
|
|
|
|
|
|
|
|
while (n) {
|
|
|
|
struct module_info *m;
|
|
|
|
int cmp;
|
|
|
|
|
|
|
|
m = rb_entry(n, struct module_info, rb_node);
|
|
|
|
cmp = strcmp(name, m->name);
|
|
|
|
if (cmp < 0)
|
|
|
|
n = n->rb_left;
|
|
|
|
else if (cmp > 0)
|
|
|
|
n = n->rb_right;
|
|
|
|
else
|
|
|
|
return m;
|
|
|
|
}
|
|
|
|
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
|
2017-08-03 21:49:02 +08:00
|
|
|
static int __read_proc_modules(void *arg, const char *name, u64 start,
|
|
|
|
u64 size __maybe_unused)
|
2013-10-09 20:01:11 +08:00
|
|
|
{
|
|
|
|
struct rb_root *modules = arg;
|
|
|
|
struct module_info *mi;
|
|
|
|
|
|
|
|
mi = zalloc(sizeof(struct module_info));
|
|
|
|
if (!mi)
|
2013-08-07 19:38:51 +08:00
|
|
|
return -ENOMEM;
|
|
|
|
|
2013-10-09 20:01:11 +08:00
|
|
|
mi->name = strdup(name);
|
|
|
|
mi->start = start;
|
2013-08-07 19:38:51 +08:00
|
|
|
|
2013-10-09 20:01:11 +08:00
|
|
|
if (!mi->name) {
|
|
|
|
free(mi);
|
|
|
|
return -ENOMEM;
|
|
|
|
}
|
|
|
|
|
|
|
|
add_module(mi, modules);
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int read_proc_modules(const char *filename, struct rb_root *modules)
|
|
|
|
{
|
|
|
|
if (symbol__restricted_filename(filename, "/proc/modules"))
|
|
|
|
return -1;
|
|
|
|
|
|
|
|
if (modules__parse(filename, modules, __read_proc_modules)) {
|
|
|
|
delete_modules(modules);
|
|
|
|
return -1;
|
|
|
|
}
|
2013-08-07 19:38:51 +08:00
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2013-10-14 21:57:29 +08:00
|
|
|
int compare_proc_modules(const char *from, const char *to)
|
|
|
|
{
|
|
|
|
struct rb_root from_modules = RB_ROOT;
|
|
|
|
struct rb_root to_modules = RB_ROOT;
|
|
|
|
struct rb_node *from_node, *to_node;
|
|
|
|
struct module_info *from_m, *to_m;
|
|
|
|
int ret = -1;
|
|
|
|
|
|
|
|
if (read_proc_modules(from, &from_modules))
|
|
|
|
return -1;
|
|
|
|
|
|
|
|
if (read_proc_modules(to, &to_modules))
|
|
|
|
goto out_delete_from;
|
|
|
|
|
|
|
|
from_node = rb_first(&from_modules);
|
|
|
|
to_node = rb_first(&to_modules);
|
|
|
|
while (from_node) {
|
|
|
|
if (!to_node)
|
|
|
|
break;
|
|
|
|
|
|
|
|
from_m = rb_entry(from_node, struct module_info, rb_node);
|
|
|
|
to_m = rb_entry(to_node, struct module_info, rb_node);
|
|
|
|
|
|
|
|
if (from_m->start != to_m->start ||
|
|
|
|
strcmp(from_m->name, to_m->name))
|
|
|
|
break;
|
|
|
|
|
|
|
|
from_node = rb_next(from_node);
|
|
|
|
to_node = rb_next(to_node);
|
|
|
|
}
|
|
|
|
|
|
|
|
if (!from_node && !to_node)
|
|
|
|
ret = 0;
|
|
|
|
|
|
|
|
delete_modules(&to_modules);
|
|
|
|
out_delete_from:
|
|
|
|
delete_modules(&from_modules);
|
|
|
|
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2019-11-26 08:58:33 +08:00
|
|
|
static int do_validate_kcore_modules(const char *filename, struct maps *kmaps)
|
2013-10-09 20:01:11 +08:00
|
|
|
{
|
|
|
|
struct rb_root modules = RB_ROOT;
|
|
|
|
struct map *old_map;
|
|
|
|
int err;
|
|
|
|
|
|
|
|
err = read_proc_modules(filename, &modules);
|
|
|
|
if (err)
|
|
|
|
return err;
|
|
|
|
|
2019-11-26 08:58:33 +08:00
|
|
|
maps__for_each_entry(kmaps, old_map) {
|
2013-10-09 20:01:11 +08:00
|
|
|
struct module_info *mi;
|
|
|
|
|
2018-05-22 18:54:35 +08:00
|
|
|
if (!__map__is_kmodule(old_map)) {
|
2013-10-09 20:01:11 +08:00
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Module must be in memory at the same address */
|
|
|
|
mi = find_module(old_map->dso->short_name, &modules);
|
|
|
|
if (!mi || mi->start != old_map->start) {
|
|
|
|
err = -EINVAL;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
out:
|
|
|
|
delete_modules(&modules);
|
|
|
|
return err;
|
|
|
|
}
|
|
|
|
|
2013-08-07 19:38:51 +08:00
|
|
|
/*
|
2013-10-09 20:01:11 +08:00
|
|
|
* If kallsyms is referenced by name then we look for filename in the same
|
2013-08-07 19:38:51 +08:00
|
|
|
* directory.
|
|
|
|
*/
|
2013-10-09 20:01:11 +08:00
|
|
|
static bool filename_from_kallsyms_filename(char *filename,
|
|
|
|
const char *base_name,
|
|
|
|
const char *kallsyms_filename)
|
2013-08-07 19:38:51 +08:00
|
|
|
{
|
|
|
|
char *name;
|
|
|
|
|
2013-10-09 20:01:11 +08:00
|
|
|
strcpy(filename, kallsyms_filename);
|
|
|
|
name = strrchr(filename, '/');
|
2013-08-07 19:38:51 +08:00
|
|
|
if (!name)
|
|
|
|
return false;
|
|
|
|
|
2013-10-09 20:01:11 +08:00
|
|
|
name += 1;
|
|
|
|
|
|
|
|
if (!strcmp(name, "kallsyms")) {
|
|
|
|
strcpy(name, base_name);
|
2013-08-07 19:38:51 +08:00
|
|
|
return true;
|
|
|
|
}
|
|
|
|
|
|
|
|
return false;
|
|
|
|
}
|
|
|
|
|
2013-10-09 20:01:11 +08:00
|
|
|
static int validate_kcore_modules(const char *kallsyms_filename,
|
|
|
|
struct map *map)
|
|
|
|
{
|
2019-11-26 08:58:33 +08:00
|
|
|
struct maps *kmaps = map__kmaps(map);
|
2013-10-09 20:01:11 +08:00
|
|
|
char modules_filename[PATH_MAX];
|
|
|
|
|
2015-04-07 16:22:45 +08:00
|
|
|
if (!kmaps)
|
|
|
|
return -EINVAL;
|
|
|
|
|
2013-10-09 20:01:11 +08:00
|
|
|
if (!filename_from_kallsyms_filename(modules_filename, "modules",
|
|
|
|
kallsyms_filename))
|
|
|
|
return -EINVAL;
|
|
|
|
|
2018-05-22 18:54:35 +08:00
|
|
|
if (do_validate_kcore_modules(modules_filename, kmaps))
|
2013-10-09 20:01:11 +08:00
|
|
|
return -EINVAL;
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2014-01-29 22:14:41 +08:00
|
|
|
static int validate_kcore_addresses(const char *kallsyms_filename,
|
|
|
|
struct map *map)
|
|
|
|
{
|
|
|
|
struct kmap *kmap = map__kmap(map);
|
|
|
|
|
2015-04-07 16:22:45 +08:00
|
|
|
if (!kmap)
|
|
|
|
return -EINVAL;
|
|
|
|
|
2014-01-29 22:14:41 +08:00
|
|
|
if (kmap->ref_reloc_sym && kmap->ref_reloc_sym->name) {
|
|
|
|
u64 start;
|
|
|
|
|
perf symbols: Accept symbols starting at address 0
That is the case of _text on s390, and we have some functions that return an
address, using address zero to report problems, oops.
This would lead the symbol loading routines to not use "_text" as the reference
relocation symbol, or the first symbol for the kernel, but use instead
"_stext", that is at the same address on x86_64 and others, but not on s390:
[acme@localhost perf-4.11.0-rc6]$ head -15 /proc/kallsyms
0000000000000000 T _text
0000000000000418 t iplstart
0000000000000800 T start
000000000000080a t .base
000000000000082e t .sk8x8
0000000000000834 t .gotr
0000000000000842 t .cmd
0000000000000846 t .parm
000000000000084a t .lowcase
0000000000010000 T startup
0000000000010010 T startup_kdump
0000000000010214 t startup_kdump_relocated
0000000000011000 T startup_continue
00000000000112a0 T _ehead
0000000000100000 T _stext
[acme@localhost perf-4.11.0-rc6]$
Which in turn would make 'perf test vmlinux' fail because it wouldn't find
the symbols before "_stext" in kallsyms.
Fix it by using the return value only for errors and storing the
address, when the symbol is successfully found, in a provided pointer
arg.
Before this patch:
[acme@localhost perf-4.11.0-rc6]$ tools/perf/perf test -v 1
1: vmlinux symtab matches kallsyms :
--- start ---
test child forked, pid 40693
Looking at the vmlinux_path (8 entries long)
Using /usr/lib/debug/lib/modules/3.10.0-654.el7.s390x/vmlinux for symbols
ERR : 0: _text not on kallsyms
ERR : 0x418: iplstart not on kallsyms
ERR : 0x800: start not on kallsyms
ERR : 0x80a: .base not on kallsyms
ERR : 0x82e: .sk8x8 not on kallsyms
ERR : 0x834: .gotr not on kallsyms
ERR : 0x842: .cmd not on kallsyms
ERR : 0x846: .parm not on kallsyms
ERR : 0x84a: .lowcase not on kallsyms
ERR : 0x10000: startup not on kallsyms
ERR : 0x10010: startup_kdump not on kallsyms
ERR : 0x10214: startup_kdump_relocated not on kallsyms
ERR : 0x11000: startup_continue not on kallsyms
ERR : 0x112a0: _ehead not on kallsyms
<SNIP warnings>
test child finished with -1
---- end ----
vmlinux symtab matches kallsyms: FAILED!
[acme@localhost perf-4.11.0-rc6]$
After:
[acme@localhost perf-4.11.0-rc6]$ tools/perf/perf test -v 1
1: vmlinux symtab matches kallsyms :
--- start ---
test child forked, pid 47160
<SNIP warnings>
test child finished with 0
---- end ----
vmlinux symtab matches kallsyms: Ok
[acme@localhost perf-4.11.0-rc6]$
Reported-by: Michael Petlan <mpetlan@redhat.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-9x9bwgd3btwdk1u51xie93fz@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-04-28 08:21:09 +08:00
|
|
|
if (kallsyms__get_function_start(kallsyms_filename,
|
|
|
|
kmap->ref_reloc_sym->name, &start))
|
|
|
|
return -ENOENT;
|
2014-01-29 22:14:41 +08:00
|
|
|
if (start != kmap->ref_reloc_sym->addr)
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
|
|
|
|
return validate_kcore_modules(kallsyms_filename, map);
|
|
|
|
}
|
|
|
|
|
2013-10-09 20:01:11 +08:00
|
|
|
struct kcore_mapfn_data {
|
|
|
|
struct dso *dso;
|
|
|
|
struct list_head maps;
|
|
|
|
};
|
|
|
|
|
|
|
|
static int kcore_mapfn(u64 start, u64 len, u64 pgoff, void *data)
|
|
|
|
{
|
|
|
|
struct kcore_mapfn_data *md = data;
|
|
|
|
struct map *map;
|
|
|
|
|
2018-04-27 03:52:34 +08:00
|
|
|
map = map__new2(start, md->dso);
|
2013-10-09 20:01:11 +08:00
|
|
|
if (map == NULL)
|
|
|
|
return -ENOMEM;
|
|
|
|
|
|
|
|
map->end = map->start + len;
|
|
|
|
map->pgoff = pgoff;
|
|
|
|
|
|
|
|
list_add(&map->node, &md->maps);
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2019-05-08 21:20:06 +08:00
|
|
|
/*
|
2019-11-26 08:58:33 +08:00
|
|
|
* Merges map into maps by splitting the new map within the existing map
|
|
|
|
* regions.
|
2019-05-08 21:20:06 +08:00
|
|
|
*/
|
2019-11-26 08:58:33 +08:00
|
|
|
int maps__merge_in(struct maps *kmaps, struct map *new_map)
|
2019-05-08 21:20:06 +08:00
|
|
|
{
|
|
|
|
struct map *old_map;
|
|
|
|
LIST_HEAD(merged);
|
|
|
|
|
2019-11-26 08:58:33 +08:00
|
|
|
maps__for_each_entry(kmaps, old_map) {
|
2019-05-08 21:20:06 +08:00
|
|
|
/* no overlap with this one */
|
|
|
|
if (new_map->end < old_map->start ||
|
|
|
|
new_map->start >= old_map->end)
|
|
|
|
continue;
|
|
|
|
|
|
|
|
if (new_map->start < old_map->start) {
|
|
|
|
/*
|
|
|
|
* |new......
|
|
|
|
* |old....
|
|
|
|
*/
|
|
|
|
if (new_map->end < old_map->end) {
|
|
|
|
/*
|
|
|
|
* |new......| -> |new..|
|
|
|
|
* |old....| -> |old....|
|
|
|
|
*/
|
|
|
|
new_map->end = old_map->start;
|
|
|
|
} else {
|
|
|
|
/*
|
|
|
|
* |new.............| -> |new..| |new..|
|
|
|
|
* |old....| -> |old....|
|
|
|
|
*/
|
|
|
|
struct map *m = map__clone(new_map);
|
|
|
|
|
|
|
|
if (!m)
|
|
|
|
return -ENOMEM;
|
|
|
|
|
|
|
|
m->end = old_map->start;
|
|
|
|
list_add_tail(&m->node, &merged);
|
2020-06-02 19:25:05 +08:00
|
|
|
new_map->pgoff += old_map->end - new_map->start;
|
2019-05-08 21:20:06 +08:00
|
|
|
new_map->start = old_map->end;
|
|
|
|
}
|
|
|
|
} else {
|
|
|
|
/*
|
|
|
|
* |new......
|
|
|
|
* |old....
|
|
|
|
*/
|
|
|
|
if (new_map->end < old_map->end) {
|
|
|
|
/*
|
|
|
|
* |new..| -> x
|
|
|
|
* |old.........| -> |old.........|
|
|
|
|
*/
|
|
|
|
map__put(new_map);
|
|
|
|
new_map = NULL;
|
|
|
|
break;
|
|
|
|
} else {
|
|
|
|
/*
|
|
|
|
* |new......| -> |new...|
|
|
|
|
* |old....| -> |old....|
|
|
|
|
*/
|
2020-06-02 19:25:05 +08:00
|
|
|
new_map->pgoff += old_map->end - new_map->start;
|
2019-05-08 21:20:06 +08:00
|
|
|
new_map->start = old_map->end;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
while (!list_empty(&merged)) {
|
|
|
|
old_map = list_entry(merged.next, struct map, node);
|
|
|
|
list_del_init(&old_map->node);
|
2019-11-26 08:58:33 +08:00
|
|
|
maps__insert(kmaps, old_map);
|
2019-05-08 21:20:06 +08:00
|
|
|
map__put(old_map);
|
|
|
|
}
|
|
|
|
|
|
|
|
if (new_map) {
|
2019-11-26 08:58:33 +08:00
|
|
|
maps__insert(kmaps, new_map);
|
2019-05-08 21:20:06 +08:00
|
|
|
map__put(new_map);
|
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2013-08-07 19:38:51 +08:00
|
|
|
static int dso__load_kcore(struct dso *dso, struct map *map,
|
|
|
|
const char *kallsyms_filename)
|
|
|
|
{
|
2019-11-26 08:58:33 +08:00
|
|
|
struct maps *kmaps = map__kmaps(map);
|
2013-08-07 19:38:51 +08:00
|
|
|
struct kcore_mapfn_data md;
|
2019-10-28 22:55:28 +08:00
|
|
|
struct map *old_map, *new_map, *replacement_map = NULL, *next;
|
2018-05-22 18:54:36 +08:00
|
|
|
struct machine *machine;
|
2013-08-07 19:38:51 +08:00
|
|
|
bool is_64_bit;
|
|
|
|
int err, fd;
|
|
|
|
char kcore_filename[PATH_MAX];
|
2018-05-09 19:43:34 +08:00
|
|
|
u64 stext;
|
2013-08-07 19:38:51 +08:00
|
|
|
|
2015-04-07 16:22:45 +08:00
|
|
|
if (!kmaps)
|
|
|
|
return -EINVAL;
|
|
|
|
|
2018-05-22 18:54:36 +08:00
|
|
|
machine = kmaps->machine;
|
|
|
|
|
2013-08-07 19:38:51 +08:00
|
|
|
/* This function requires that the map is the kernel map */
|
2018-04-24 03:43:47 +08:00
|
|
|
if (!__map__is_kernel(map))
|
2013-08-07 19:38:51 +08:00
|
|
|
return -EINVAL;
|
|
|
|
|
2013-10-09 20:01:11 +08:00
|
|
|
if (!filename_from_kallsyms_filename(kcore_filename, "kcore",
|
|
|
|
kallsyms_filename))
|
|
|
|
return -EINVAL;
|
|
|
|
|
2014-01-29 22:14:41 +08:00
|
|
|
/* Modules and kernel must be present at their original addresses */
|
|
|
|
if (validate_kcore_addresses(kallsyms_filename, map))
|
2013-08-07 19:38:51 +08:00
|
|
|
return -EINVAL;
|
|
|
|
|
|
|
|
md.dso = dso;
|
|
|
|
INIT_LIST_HEAD(&md.maps);
|
|
|
|
|
|
|
|
fd = open(kcore_filename, O_RDONLY);
|
2015-06-19 16:57:33 +08:00
|
|
|
if (fd < 0) {
|
2015-08-20 18:07:40 +08:00
|
|
|
pr_debug("Failed to open %s. Note /proc/kcore requires CAP_SYS_RAWIO capability to access.\n",
|
|
|
|
kcore_filename);
|
2013-08-07 19:38:51 +08:00
|
|
|
return -EINVAL;
|
2015-06-19 16:57:33 +08:00
|
|
|
}
|
2013-08-07 19:38:51 +08:00
|
|
|
|
|
|
|
/* Read new maps into temporary lists */
|
2018-04-27 03:11:47 +08:00
|
|
|
err = file__read_maps(fd, map->prot & PROT_EXEC, kcore_mapfn, &md,
|
2013-08-07 19:38:51 +08:00
|
|
|
&is_64_bit);
|
|
|
|
if (err)
|
|
|
|
goto out_err;
|
2014-07-14 18:02:41 +08:00
|
|
|
dso->is_64_bit = is_64_bit;
|
2013-08-07 19:38:51 +08:00
|
|
|
|
|
|
|
if (list_empty(&md.maps)) {
|
|
|
|
err = -EINVAL;
|
|
|
|
goto out_err;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Remove old maps */
|
2019-11-26 08:58:33 +08:00
|
|
|
maps__for_each_entry_safe(kmaps, old_map, next) {
|
2019-05-08 21:20:06 +08:00
|
|
|
/*
|
|
|
|
* We need to preserve eBPF maps even if they are
|
|
|
|
* covered by kcore, because we need to access
|
|
|
|
* eBPF dso for source data.
|
|
|
|
*/
|
|
|
|
if (old_map != map && !__map__is_bpf_prog(old_map))
|
2019-11-26 08:58:33 +08:00
|
|
|
maps__remove(kmaps, old_map);
|
2013-08-07 19:38:51 +08:00
|
|
|
}
|
2018-05-22 18:54:36 +08:00
|
|
|
machine->trampolines_mapped = false;
|
2013-08-07 19:38:51 +08:00
|
|
|
|
2018-05-09 19:43:34 +08:00
|
|
|
/* Find the kernel map using the '_stext' symbol */
|
|
|
|
if (!kallsyms__get_function_start(kallsyms_filename, "_stext", &stext)) {
|
perf symbols: Symbol lookup with kcore can fail if multiple segments match stext
This problem was encountered on an arm64 system with a lot of memory.
Without kernel debug symbols installed, and with both kcore and kallsyms
available, perf managed to get confused and returned "unknown" for all
of the kernel symbols that it tried to look up.
On this system, stext fell within the vmalloc segment. The kcore symbol
matching code tries to find the first segment that contains stext and
uses that to replace the segment generated from just the kallsyms
information. In this case, however, there were two: a very large
vmalloc segment, and the text segment. This caused perf to get confused
because multiple overlapping segments were inserted into the RB tree
that holds the discovered segments. However, that alone wasn't
sufficient to cause the problem. Even when we could find the segment,
the offsets were adjusted in such a way that the newly generated symbols
didn't line up with the instruction addresses in the trace. The most
obvious solution would be to consult which segment type is text from
kcore, but this information is not exposed to users.
Instead, select the smallest matching segment that contains stext
instead of the first matching segment. This allows us to match the text
segment instead of vmalloc, if one is contained within the other.
Reviewed-by: Adrian Hunter <adrian.hunter@intel.com>
Signed-off-by: Krister Johansen <kjlx@templeofstupid.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: David Reaver <me@davidreaver.com>
Cc: Ian Rogers <irogers@google.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Michael Petlan <mpetlan@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/20230125183418.GD1963@templeofstupid.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2023-01-26 02:34:18 +08:00
|
|
|
u64 replacement_size = 0;
|
|
|
|
|
2018-05-09 19:43:34 +08:00
|
|
|
list_for_each_entry(new_map, &md.maps, node) {
|
2023-01-26 02:34:18 +08:00
|
|
|
u64 new_size = new_map->end - new_map->start;
|
|
|
|
|
|
|
|
if (!(stext >= new_map->start && stext < new_map->end))
|
|
|
|
continue;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* On some architectures, ARM64 for example, the kernel
|
|
|
|
* text can get allocated inside of the vmalloc segment.
|
|
|
|
* Select the smallest matching segment, in case stext
|
|
|
|
* falls within more than one in the list.
|
|
|
|
*/
|
|
|
|
if (!replacement_map || new_size < replacement_size) {
|
2018-05-09 19:43:34 +08:00
|
|
|
replacement_map = new_map;
|
2023-01-26 02:34:18 +08:00
|
|
|
replacement_size = new_size;
|
2018-05-09 19:43:34 +08:00
|
|
|
}
|
2013-08-07 19:38:51 +08:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
if (!replacement_map)
|
|
|
|
replacement_map = list_entry(md.maps.next, struct map, node);
|
|
|
|
|
|
|
|
/* Add new maps */
|
|
|
|
while (!list_empty(&md.maps)) {
|
|
|
|
new_map = list_entry(md.maps.next, struct map, node);
|
2015-05-26 02:30:09 +08:00
|
|
|
list_del_init(&new_map->node);
|
2013-08-07 19:38:51 +08:00
|
|
|
if (new_map == replacement_map) {
|
|
|
|
map->start = new_map->start;
|
|
|
|
map->end = new_map->end;
|
|
|
|
map->pgoff = new_map->pgoff;
|
|
|
|
map->map_ip = new_map->map_ip;
|
|
|
|
map->unmap_ip = new_map->unmap_ip;
|
|
|
|
/* Ensure maps are correctly ordered */
|
2015-05-26 03:59:56 +08:00
|
|
|
map__get(map);
|
2019-11-26 08:58:33 +08:00
|
|
|
maps__remove(kmaps, map);
|
|
|
|
maps__insert(kmaps, map);
|
2015-05-26 03:59:56 +08:00
|
|
|
map__put(map);
|
2019-05-08 21:20:06 +08:00
|
|
|
map__put(new_map);
|
2013-08-07 19:38:51 +08:00
|
|
|
} else {
|
2019-05-08 21:20:06 +08:00
|
|
|
/*
|
|
|
|
* Merge kcore map into existing maps,
|
|
|
|
* and ensure that current maps (eBPF)
|
|
|
|
* stay intact.
|
|
|
|
*/
|
2019-11-26 08:58:33 +08:00
|
|
|
if (maps__merge_in(kmaps, new_map))
|
2019-05-08 21:20:06 +08:00
|
|
|
goto out_err;
|
2013-08-07 19:38:51 +08:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2018-05-22 18:54:36 +08:00
|
|
|
if (machine__is(machine, "x86_64")) {
|
|
|
|
u64 addr;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* If one of the corresponding symbols is there, assume the
|
|
|
|
* entry trampoline maps are too.
|
|
|
|
*/
|
|
|
|
if (!kallsyms__get_function_start(kallsyms_filename,
|
|
|
|
ENTRY_TRAMPOLINE_NAME,
|
|
|
|
&addr))
|
|
|
|
machine->trampolines_mapped = true;
|
|
|
|
}
|
|
|
|
|
2013-08-07 19:38:51 +08:00
|
|
|
/*
|
|
|
|
* Set the data type and long name so that kcore can be read via
|
|
|
|
* dso__data_read_addr().
|
|
|
|
*/
|
2020-08-08 20:21:54 +08:00
|
|
|
if (dso->kernel == DSO_SPACE__KERNEL_GUEST)
|
2013-12-18 03:14:07 +08:00
|
|
|
dso->binary_type = DSO_BINARY_TYPE__GUEST_KCORE;
|
2013-08-07 19:38:51 +08:00
|
|
|
else
|
2013-12-18 03:14:07 +08:00
|
|
|
dso->binary_type = DSO_BINARY_TYPE__KCORE;
|
2013-12-11 02:08:44 +08:00
|
|
|
dso__set_long_name(dso, strdup(kcore_filename), true);
|
2013-08-07 19:38:51 +08:00
|
|
|
|
|
|
|
close(fd);
|
|
|
|
|
2018-04-27 03:11:47 +08:00
|
|
|
if (map->prot & PROT_EXEC)
|
2013-08-07 19:38:51 +08:00
|
|
|
pr_debug("Using %s for kernel object code\n", kcore_filename);
|
|
|
|
else
|
|
|
|
pr_debug("Using %s for kernel data\n", kcore_filename);
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
out_err:
|
|
|
|
while (!list_empty(&md.maps)) {
|
|
|
|
map = list_entry(md.maps.next, struct map, node);
|
2015-05-26 02:30:09 +08:00
|
|
|
list_del_init(&map->node);
|
2015-05-26 03:59:56 +08:00
|
|
|
map__put(map);
|
2013-08-07 19:38:51 +08:00
|
|
|
}
|
|
|
|
close(fd);
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
|
2014-01-29 22:14:43 +08:00
|
|
|
/*
|
|
|
|
* If the kernel is relocated at boot time, kallsyms won't match. Compute the
|
|
|
|
* delta based on the relocation reference symbol.
|
|
|
|
*/
|
2018-04-28 02:47:13 +08:00
|
|
|
static int kallsyms__delta(struct kmap *kmap, const char *filename, u64 *delta)
|
2014-01-29 22:14:43 +08:00
|
|
|
{
|
|
|
|
u64 addr;
|
|
|
|
|
|
|
|
if (!kmap->ref_reloc_sym || !kmap->ref_reloc_sym->name)
|
|
|
|
return 0;
|
|
|
|
|
2017-04-28 08:21:09 +08:00
|
|
|
if (kallsyms__get_function_start(filename, kmap->ref_reloc_sym->name, &addr))
|
2014-01-29 22:14:43 +08:00
|
|
|
return -1;
|
|
|
|
|
|
|
|
*delta = addr - kmap->ref_reloc_sym->addr;
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2016-04-19 23:12:49 +08:00
|
|
|
int __dso__load_kallsyms(struct dso *dso, const char *filename,
|
2016-09-02 06:25:52 +08:00
|
|
|
struct map *map, bool no_kcore)
|
2009-10-08 00:48:56 +08:00
|
|
|
{
|
2018-04-28 02:47:13 +08:00
|
|
|
struct kmap *kmap = map__kmap(map);
|
2014-01-29 22:14:43 +08:00
|
|
|
u64 delta = 0;
|
|
|
|
|
perf symbols: Handle /proc/sys/kernel/kptr_restrict
Perf uses /proc/modules to figure out where kernel modules are loaded.
With the advent of kptr_restrict, non root users get zeroes for all module
start addresses.
So check if kptr_restrict is non zero and don't generate the syntethic
PERF_RECORD_MMAP events for them.
Warn the user about it in perf record and in perf report.
In perf report the reference relocation symbol being zero means that
kptr_restrict was set, thus /proc/kallsyms has only zeroed addresses, so don't
use it to fixup symbol addresses when using a valid kallsyms (in the buildid
cache) or vmlinux (in the vmlinux path) build-id located automatically or
specified by the user.
Provide an explanation about it in 'perf report' if kernel samples were taken,
checking if a suitable vmlinux or kallsyms was found/specified.
Restricted /proc/kallsyms don't go to the buildid cache anymore.
Example:
[acme@emilia ~]$ perf record -F 100000 sleep 1
WARNING: Kernel address maps (/proc/{kallsyms,modules}) are restricted, check
/proc/sys/kernel/kptr_restrict.
Samples in kernel functions may not be resolved if a suitable vmlinux file is
not found in the buildid cache or in the vmlinux path.
Samples in kernel modules won't be resolved at all.
If some relocation was applied (e.g. kexec) symbols may be misresolved even
with a suitable vmlinux or kallsyms file.
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.005 MB perf.data (~231 samples) ]
[acme@emilia ~]$
[acme@emilia ~]$ perf report --stdio
Kernel address maps (/proc/{kallsyms,modules}) were restricted,
check /proc/sys/kernel/kptr_restrict before running 'perf record'.
If some relocation was applied (e.g. kexec) symbols may be misresolved.
Samples in kernel modules can't be resolved as well.
# Events: 13 cycles
#
# Overhead Command Shared Object Symbol
# ........ ....... ................. .....................
#
20.24% sleep [kernel.kallsyms] [k] page_fault
20.04% sleep [kernel.kallsyms] [k] filemap_fault
19.78% sleep [kernel.kallsyms] [k] __lru_cache_add
19.69% sleep ld-2.12.so [.] memcpy
14.71% sleep [kernel.kallsyms] [k] dput
4.70% sleep [kernel.kallsyms] [k] flush_signal_handlers
0.73% sleep [kernel.kallsyms] [k] perf_event_comm
0.11% sleep [kernel.kallsyms] [k] native_write_msr_safe
#
# (For a higher level overview, try: perf report --sort comm,dso)
#
[acme@emilia ~]$
This is because it found a suitable vmlinux (build-id checked) in
/lib/modules/2.6.39-rc7+/build/vmlinux (use -v in perf report to see the long
file name).
If we remove that file from the vmlinux path:
[root@emilia ~]# mv /lib/modules/2.6.39-rc7+/build/vmlinux \
/lib/modules/2.6.39-rc7+/build/vmlinux.OFF
[acme@emilia ~]$ perf report --stdio
[kernel.kallsyms] with build id 57298cdbe0131f6871667ec0eaab4804dcf6f562
not found, continuing without symbols
Kernel address maps (/proc/{kallsyms,modules}) were restricted, check
/proc/sys/kernel/kptr_restrict before running 'perf record'.
As no suitable kallsyms nor vmlinux was found, kernel samples can't be
resolved.
Samples in kernel modules can't be resolved as well.
# Events: 13 cycles
#
# Overhead Command Shared Object Symbol
# ........ ....... ................. ......
#
80.31% sleep [kernel.kallsyms] [k] 0xffffffff8103425a
19.69% sleep ld-2.12.so [.] memcpy
#
# (For a higher level overview, try: perf report --sort comm,dso)
#
[acme@emilia ~]$
Reported-by: Stephane Eranian <eranian@google.com>
Suggested-by: David Miller <davem@davemloft.net>
Cc: Dave Jones <davej@redhat.com>
Cc: David Miller <davem@davemloft.net>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Kees Cook <kees.cook@canonical.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Tom Zanussi <tzanussi@gmail.com>
Link: http://lkml.kernel.org/n/tip-mt512joaxxbhhp1odop04yit@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
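The restricted-kallsyms check referenced below boils down to reading the first symbol address in the file. A minimal standalone sketch (illustrative only, not perf's exact symbol__restricted_filename implementation): with kptr_restrict in effect the kernel prints every address as zero, so a first address of 0 marks the file as unusable.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/*
 * Sketch: a kallsyms-style file is "restricted" when its first
 * symbol address reads as zero (kptr_restrict hides real addresses).
 */
static bool kallsyms_restricted(const char *filename)
{
	unsigned long long addr;
	bool restricted = false;
	FILE *fp = fopen(filename, "r");

	if (!fp)
		return false;
	if (fscanf(fp, "%llx", &addr) == 1)
		restricted = (addr == 0);
	fclose(fp);
	return restricted;
}
```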
	if (symbol__restricted_filename(filename, "/proc/kallsyms"))
		return -1;

	if (!kmap || !kmap->kmaps)
		return -1;

	if (dso__load_all_kallsyms(dso, filename) < 0)
		return -1;

	if (kallsyms__delta(kmap, filename, &delta))
		return -1;

	symbols__fixup_end(&dso->symbols, true);
	symbols__fixup_duplicate(&dso->symbols);

	if (dso->kernel == DSO_SPACE__KERNEL_GUEST)
		dso->symtab_type = DSO_BINARY_TYPE__GUEST_KALLSYMS;
	else
		dso->symtab_type = DSO_BINARY_TYPE__KALLSYMS;

	if (!no_kcore && !dso__load_kcore(dso, map, filename))
		return maps__split_kallsyms_for_kcore(kmap->kmaps, dso);
	else
		return maps__split_kallsyms(kmap->kmaps, dso, delta, map);
}

int dso__load_kallsyms(struct dso *dso, const char *filename,
		       struct map *map)
{
	return __dso__load_kallsyms(dso, filename, map, false);
}
perf_counter tools: Define and use our own u64, s64 etc. definitions
On 64-bit powerpc, __u64 is defined to be unsigned long rather than
unsigned long long. This causes compiler warnings every time we
print a __u64 value with %Lx.
Rather than changing __u64, we define our own u64 to be unsigned long
long on all architectures, and similarly s64 as signed long long.
For consistency we also define u32, s32, u16, s16, u8 and s8. These
definitions are put in a new header, types.h, because these definitions
are needed in util/string.h and util/symbol.h.
The main change here is the mechanical change of __[us]{64,32,16,8}
to remove the "__". The other changes are:
* Create types.h
* Include types.h in perf.h, util/string.h and util/symbol.h
* Add types.h to the LIB_H definition in Makefile
* Added (u64) casts in process_overflow_event() and print_sym_table()
to kill two remaining warnings.
Signed-off-by: Paul Mackerras <paulus@samba.org>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: benh@kernel.crashing.org
LKML-Reference: <19003.33494.495844.956580@cargo.ozlabs.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

static int dso__load_perf_map(const char *map_path, struct dso *dso)
{
	char *line = NULL;
	size_t n;
	FILE *file;
	int nr_syms = 0;

	file = fopen(map_path, "r");
	if (file == NULL)
		goto out_failure;

	while (!feof(file)) {
		u64 start, size;
		struct symbol *sym;
		int line_len, len;

		line_len = getline(&line, &n, file);
		if (line_len < 0)
			break;

		if (!line)
			goto out_failure;

		line[--line_len] = '\0'; /* \n */

		len = hex2u64(line, &start);

		len++;
		if (len + 2 >= line_len)
			continue;

		len += hex2u64(line + len, &size);

		len++;
		if (len + 2 >= line_len)
			continue;

		sym = symbol__new(start, size, STB_GLOBAL, STT_FUNC, line + len);

		if (sym == NULL)
			goto out_delete_line;

		symbols__insert(&dso->symbols, sym);
		nr_syms++;
	}

	free(line);
	fclose(file);

	return nr_syms;

out_delete_line:
	free(line);
out_failure:
	return -1;
}
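Each line consumed by dso__load_perf_map has the form `<start-hex> <size-hex> <symbol name>`. A hedged standalone sketch of that parsing (the struct and function names here are illustrative stand-ins, not perf's hex2u64 helper):

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative parsed record for one perf-<pid>.map line. */
struct map_sym {
	unsigned long long start, size;
	char name[256];
};

/* Parse "<start-hex> <size-hex> <name>"; returns 0 on success, -1 on error. */
static int parse_perf_map_line(const char *line, struct map_sym *out)
{
	char *end;

	out->start = strtoull(line, &end, 16);
	if (end == line || *end != ' ')
		return -1;
	line = end + 1;
	out->size = strtoull(line, &end, 16);
	if (end == line || *end != ' ')
		return -1;
	snprintf(out->name, sizeof(out->name), "%s", end + 1);
	return 0;
}
```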
#ifdef HAVE_LIBBFD_SUPPORT
#define PACKAGE 'perf'
#include <bfd.h>

static int bfd_symbols__cmpvalue(const void *a, const void *b)
{
	const asymbol *as = *(const asymbol **)a, *bs = *(const asymbol **)b;

	if (bfd_asymbol_value(as) != bfd_asymbol_value(bs))
		return bfd_asymbol_value(as) - bfd_asymbol_value(bs);

	return bfd_asymbol_name(as)[0] - bfd_asymbol_name(bs)[0];
}

static int bfd2elf_binding(asymbol *symbol)
{
	if (symbol->flags & BSF_WEAK)
		return STB_WEAK;
	if (symbol->flags & BSF_GLOBAL)
		return STB_GLOBAL;
	if (symbol->flags & BSF_LOCAL)
		return STB_LOCAL;
	return -1;
}

int dso__load_bfd_symbols(struct dso *dso, const char *debugfile)
{
	int err = -1;
	long symbols_size, symbols_count, i;
	asection *section;
	asymbol **symbols, *sym;
	struct symbol *symbol;
	bfd *abfd;
	u64 start, len;

	abfd = bfd_openr(debugfile, NULL);
	if (!abfd)
		return -1;

	if (!bfd_check_format(abfd, bfd_object)) {
		pr_debug2("%s: cannot read %s bfd file.\n", __func__,
			  dso->long_name);
		goto out_close;
	}

	if (bfd_get_flavour(abfd) == bfd_target_elf_flavour)
		goto out_close;

	symbols_size = bfd_get_symtab_upper_bound(abfd);
	if (symbols_size == 0) {
		bfd_close(abfd);
		return 0;
	}

	if (symbols_size < 0)
		goto out_close;

	symbols = malloc(symbols_size);
	if (!symbols)
		goto out_close;

	symbols_count = bfd_canonicalize_symtab(abfd, symbols);
	if (symbols_count < 0)
		goto out_free;

	section = bfd_get_section_by_name(abfd, ".text");
	if (section) {
		for (i = 0; i < symbols_count; ++i) {
			if (!strcmp(bfd_asymbol_name(symbols[i]), "__ImageBase") ||
			    !strcmp(bfd_asymbol_name(symbols[i]), "__image_base__"))
				break;
		}
		if (i < symbols_count) {
			/* PE symbols can only have 4 bytes, so use .text high bits */
			dso->text_offset = section->vma - (u32)section->vma;
			dso->text_offset += (u32)bfd_asymbol_value(symbols[i]);
		} else {
			dso->text_offset = section->vma - section->filepos;
		}
	}

	qsort(symbols, symbols_count, sizeof(asymbol *), bfd_symbols__cmpvalue);

#ifdef bfd_get_section
#define bfd_asymbol_section bfd_get_section
#endif
	for (i = 0; i < symbols_count; ++i) {
		sym = symbols[i];
		section = bfd_asymbol_section(sym);
		if (bfd2elf_binding(sym) < 0)
			continue;

		while (i + 1 < symbols_count &&
		       bfd_asymbol_section(symbols[i + 1]) == section &&
		       bfd2elf_binding(symbols[i + 1]) < 0)
			i++;

		if (i + 1 < symbols_count &&
		    bfd_asymbol_section(symbols[i + 1]) == section)
			len = symbols[i + 1]->value - sym->value;
		else
			len = section->size - sym->value;

		start = bfd_asymbol_value(sym) - dso->text_offset;
		symbol = symbol__new(start, len, bfd2elf_binding(sym), STT_FUNC,
				     bfd_asymbol_name(sym));
		if (!symbol)
			goto out_free;

		symbols__insert(&dso->symbols, symbol);
	}
#ifdef bfd_get_section
#undef bfd_asymbol_section
#endif

	symbols__fixup_end(&dso->symbols, false);
	symbols__fixup_duplicate(&dso->symbols);
	dso->adjust_symbols = 1;

	err = 0;
out_free:
	free(symbols);
out_close:
	bfd_close(abfd);
	return err;
}
#endif
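The length rule in the BFD loop above (a symbol extends to the next symbol in the same section, and the last one takes the remainder of the section) can be sketched standalone, with plain arrays standing in for BFD's asymbol/asection types:

```c
#include <assert.h>

/*
 * Sketch of the len computation: values[] holds sorted symbol start
 * offsets within one section of the given size.
 */
static unsigned long sym_len(const unsigned long *values, int count,
			     int i, unsigned long section_size)
{
	if (i + 1 < count)
		return values[i + 1] - values[i];
	return section_size - values[i];
}
```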
static bool dso__is_compatible_symtab_type(struct dso *dso, bool kmod,
					   enum dso_binary_type type)
{
	switch (type) {
	case DSO_BINARY_TYPE__JAVA_JIT:
	case DSO_BINARY_TYPE__DEBUGLINK:
	case DSO_BINARY_TYPE__SYSTEM_PATH_DSO:
	case DSO_BINARY_TYPE__FEDORA_DEBUGINFO:
	case DSO_BINARY_TYPE__UBUNTU_DEBUGINFO:
	case DSO_BINARY_TYPE__MIXEDUP_UBUNTU_DEBUGINFO:
	case DSO_BINARY_TYPE__BUILDID_DEBUGINFO:
	case DSO_BINARY_TYPE__OPENEMBEDDED_DEBUGINFO:
		return !kmod && dso->kernel == DSO_SPACE__USER;

	case DSO_BINARY_TYPE__KALLSYMS:
	case DSO_BINARY_TYPE__VMLINUX:
	case DSO_BINARY_TYPE__KCORE:
		return dso->kernel == DSO_SPACE__KERNEL;

	case DSO_BINARY_TYPE__GUEST_KALLSYMS:
	case DSO_BINARY_TYPE__GUEST_VMLINUX:
	case DSO_BINARY_TYPE__GUEST_KCORE:
		return dso->kernel == DSO_SPACE__KERNEL_GUEST;

	case DSO_BINARY_TYPE__GUEST_KMODULE:
	case DSO_BINARY_TYPE__GUEST_KMODULE_COMP:
	case DSO_BINARY_TYPE__SYSTEM_PATH_KMODULE:
	case DSO_BINARY_TYPE__SYSTEM_PATH_KMODULE_COMP:
		/*
		 * kernel modules know their symtab type - it's set when
		 * creating a module dso in machine__addnew_module_map().
		 */
		return kmod && dso->symtab_type == type;

	case DSO_BINARY_TYPE__BUILD_ID_CACHE:
	case DSO_BINARY_TYPE__BUILD_ID_CACHE_DEBUGINFO:
		return true;

	case DSO_BINARY_TYPE__BPF_PROG_INFO:
	case DSO_BINARY_TYPE__BPF_IMAGE:
	case DSO_BINARY_TYPE__OOL:
	case DSO_BINARY_TYPE__NOT_FOUND:
	default:
		return false;
	}
}
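The gating logic above can be modeled in miniature: user-space binary types are only tried for non-module, user-space DSOs, kernel types only for kernel DSOs, and build-id cache types always qualify. The enum and names below are simplified stand-ins, not perf's real types:

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified model of dso__is_compatible_symtab_type's gating. */
enum space { SPACE_USER, SPACE_KERNEL };
enum bin_type { TYPE_SYSTEM_PATH_DSO, TYPE_KALLSYMS, TYPE_BUILD_ID_CACHE };

static bool compatible(enum space kernel, bool kmod, enum bin_type type)
{
	switch (type) {
	case TYPE_SYSTEM_PATH_DSO:	/* user-space debuginfo sources */
		return !kmod && kernel == SPACE_USER;
	case TYPE_KALLSYMS:		/* kernel symbol sources */
		return kernel == SPACE_KERNEL;
	case TYPE_BUILD_ID_CACHE:	/* cache entries always qualify */
		return true;
	default:
		return false;
	}
}
```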
/* Checks for the existence of the perf-<pid>.map file in two different
 * locations. First, if the process is in a separate mount namespace, check in
 * that namespace using the pid of the innermost pid namespace. If it's not in
 * a namespace, or the file can't be found there, try in the mount namespace of
 * the tracing process using our view of its pid.
 */
static int dso__find_perf_map(char *filebuf, size_t bufsz,
			      struct nsinfo **nsip)
{
	struct nscookie nsc;
	struct nsinfo *nsi;
	struct nsinfo *nnsi;
	int rc = -1;

	nsi = *nsip;

	if (nsinfo__need_setns(nsi)) {
		snprintf(filebuf, bufsz, "/tmp/perf-%d.map", nsinfo__nstgid(nsi));
		nsinfo__mountns_enter(nsi, &nsc);
		rc = access(filebuf, R_OK);
		nsinfo__mountns_exit(&nsc);
		if (rc == 0)
			return rc;
	}

	nnsi = nsinfo__copy(nsi);
	if (nnsi) {
		nsinfo__put(nsi);

		nsinfo__clear_need_setns(nnsi);
		snprintf(filebuf, bufsz, "/tmp/perf-%d.map", nsinfo__tgid(nnsi));
		*nsip = nnsi;
		rc = 0;
	}

	return rc;
}
perf_counter tools: PLT info is stripped in -debuginfo packages
So we need to get the richer .symtab from the debuginfo
packages but the PLT info from the original DSO where we have
just the leaner .dynsym symtab.
Example:
| [acme@doppio pahole]$ perf report --sort comm,dso,symbol > before
| [acme@doppio pahole]$ perf report --sort comm,dso,symbol > after
| [acme@doppio pahole]$ diff -U1 before after
| --- before 2009-07-11 11:04:22.688595741 -0300
| +++ after 2009-07-11 11:04:33.380595676 -0300
| @@ -80,3 +80,2 @@
| 0.07% pahole ./build/pahole [.] pahole_stealer
| - 0.06% pahole /usr/lib64/libdw-0.141.so [.] 0x00000000007140
| 0.06% pahole /usr/lib64/libdw-0.141.so [.] __libdw_getabbrev
| @@ -91,2 +90,3 @@
| 0.06% pahole [kernel] [k] free_hot_cold_page
| + 0.06% pahole /usr/lib64/libdw-0.141.so [.] tfind@plt
| 0.05% pahole ./build/libdwarves.so.1.0.0 [.] ftype__add_parameter
| @@ -242,2 +242,3 @@
| 0.01% pahole [kernel] [k] account_group_user_time
| + 0.01% pahole /usr/lib64/libdw-0.141.so [.] strlen@plt
| 0.01% pahole ./build/pahole [.] strcmp@plt
| [acme@doppio pahole]$
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
LKML-Reference: <1247325517-12272-4-git-send-email-acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

int dso__load(struct dso *dso, struct map *map)
{
	char *name;
	int ret = -1;
	u_int i;
	struct machine *machine = NULL;
	char *root_dir = (char *) "";
	int ss_pos = 0;
	struct symsrc ss_[2];
	struct symsrc *syms_ss = NULL, *runtime_ss = NULL;
	bool kmod;
	bool perfmap;
	struct build_id bid;
	struct nscookie nsc;
	char newmapname[PATH_MAX];
	const char *map_path = dso->long_name;

	mutex_lock(&dso->lock);
	perfmap = strncmp(dso->name, "/tmp/perf-", 10) == 0;
	if (perfmap) {
		if (dso->nsinfo && (dso__find_perf_map(newmapname,
		    sizeof(newmapname), &dso->nsinfo) == 0)) {
			map_path = newmapname;
		}
	}

	nsinfo__mountns_enter(dso->nsinfo, &nsc);

	/* check again under the dso->lock */
	if (dso__loaded(dso)) {
		ret = 1;
		goto out;
	}

	kmod = dso->symtab_type == DSO_BINARY_TYPE__SYSTEM_PATH_KMODULE ||
		dso->symtab_type == DSO_BINARY_TYPE__SYSTEM_PATH_KMODULE_COMP ||
		dso->symtab_type == DSO_BINARY_TYPE__GUEST_KMODULE ||
		dso->symtab_type == DSO_BINARY_TYPE__GUEST_KMODULE_COMP;

	if (dso->kernel && !kmod) {
		if (dso->kernel == DSO_SPACE__KERNEL)
			ret = dso__load_kernel_sym(dso, map);
		else if (dso->kernel == DSO_SPACE__KERNEL_GUEST)
			ret = dso__load_guest_kernel_sym(dso, map);

		machine = map__kmaps(map)->machine;
		if (machine__is(machine, "x86_64"))
			machine__map_x86_64_entry_trampolines(machine, dso);
		goto out;
	}

	dso->adjust_symbols = 0;

	if (perfmap) {
		ret = dso__load_perf_map(map_path, dso);
		dso->symtab_type = ret > 0 ? DSO_BINARY_TYPE__JAVA_JIT :
					     DSO_BINARY_TYPE__NOT_FOUND;
		goto out;
	}

	if (machine)
		root_dir = machine->root_dir;

	name = malloc(PATH_MAX);
	if (!name)
		goto out;

	/*
	 * Read the build id if possible. This is required for
	 * DSO_BINARY_TYPE__BUILDID_DEBUGINFO to work
	 */
	if (!dso->has_build_id &&
	    is_regular_file(dso->long_name)) {
		__symbol__join_symfs(name, PATH_MAX, dso->long_name);
		if (filename__read_build_id(name, &bid) > 0)
			dso__set_build_id(dso, &bid);
	}

	/*
	 * Iterate over candidate debug images.
	 * Keep track of "interesting" ones (those which have a symtab, dynsym,
	 * and/or opd section) for processing.
	 */
	for (i = 0; i < DSO_BINARY_TYPE__SYMTAB_CNT; i++) {
		struct symsrc *ss = &ss_[ss_pos];
		bool next_slot = false;
		bool is_reg;
		bool nsexit;
		int bfdrc = -1;
		int sirc = -1;

		enum dso_binary_type symtab_type = binary_type_symtab[i];

		nsexit = (symtab_type == DSO_BINARY_TYPE__BUILD_ID_CACHE ||
		    symtab_type == DSO_BINARY_TYPE__BUILD_ID_CACHE_DEBUGINFO);

		if (!dso__is_compatible_symtab_type(dso, kmod, symtab_type))
			continue;

		if (dso__read_binary_type_filename(dso, symtab_type,
						   root_dir, name, PATH_MAX))
			continue;

		if (nsexit)
			nsinfo__mountns_exit(&nsc);

		is_reg = is_regular_file(name);
		if (!is_reg && errno == ENOENT && dso->nsinfo) {
			char *new_name = filename_with_chroot(dso->nsinfo->pid,
							      name);
			if (new_name) {
				is_reg = is_regular_file(new_name);
				strlcpy(name, new_name, PATH_MAX);
				free(new_name);
			}
		}

#ifdef HAVE_LIBBFD_SUPPORT
		if (is_reg)
			bfdrc = dso__load_bfd_symbols(dso, name);
#endif
		if (is_reg && bfdrc < 0)
			sirc = symsrc__init(ss, dso, name, symtab_type);

		if (nsexit)
			nsinfo__mountns_enter(dso->nsinfo, &nsc);

		if (bfdrc == 0) {
			ret = 0;
			break;
		}

		if (!is_reg || sirc < 0)
			continue;

		if (!syms_ss && symsrc__has_symtab(ss)) {
			syms_ss = ss;
			next_slot = true;
			if (!dso->symsrc_filename)
				dso->symsrc_filename = strdup(name);
		}

		if (!runtime_ss && symsrc__possibly_runtime(ss)) {
			runtime_ss = ss;
			next_slot = true;
		}

		if (next_slot) {
			ss_pos++;

			if (syms_ss && runtime_ss)
				break;
		} else {
			symsrc__destroy(ss);
		}
	}

	if (!runtime_ss && !syms_ss)
		goto out_free;

	if (runtime_ss && !syms_ss) {
		syms_ss = runtime_ss;
	}

	/* We'll have to hope for the best */
	if (!runtime_ss && syms_ss)
		runtime_ss = syms_ss;

	if (syms_ss)
		ret = dso__load_sym(dso, map, syms_ss, runtime_ss, kmod);
	else
		ret = -1;

	if (ret > 0) {
		int nr_plt;

		nr_plt = dso__synthesize_plt_symbols(dso, runtime_ss);
		if (nr_plt > 0)
			ret += nr_plt;
	}

	for (; ss_pos > 0; ss_pos--)
		symsrc__destroy(&ss_[ss_pos - 1]);
out_free:
	free(name);
	if (ret < 0 && strstr(dso->name, " (deleted)") != NULL)
		ret = 0;
out:
	dso__set_loaded(dso);
	mutex_unlock(&dso->lock);
	nsinfo__mountns_exit(&nsc);

	return ret;
}
perf map_groups: Auto sort maps by name, if needed
There are still lots of lookups by name, even if just when loading
vmlinux, till that code is studied to figure out if its possible to do
away with those map lookup by names, provide a way to sort it using
libc's qsort/bsearch.
Doing it at the first lookup defers the sorting a bit, and as the code
stands now, is never done for user maps, just for the kernel ones.
# perf probe -l
# perf probe -x ~/bin/perf -L __map_groups__find_by_name
<__map_groups__find_by_name@/home/acme/git/perf/tools/perf/util/symbol.c:0>
0 static struct map *__map_groups__find_by_name(struct map_groups *mg, const char *name)
1 {
struct map **mapp;
4 if (mg->maps_by_name == NULL &&
5 map__groups__sort_by_name_from_rbtree(mg))
6 return NULL;
8 mapp = bsearch(name, mg->maps_by_name, mg->nr_maps, sizeof(*mapp), map__strcmp_name);
9 if (mapp)
10 return *mapp;
11 return NULL;
12 }
struct map *map_groups__find_by_name(struct map_groups *mg, const char *name)
{
# perf probe -x ~/bin/perf 'found=__map_groups__find_by_name:10 name:string'
Added new event:
probe_perf:found (on __map_groups__find_by_name:10 in /home/acme/bin/perf with name:string)
You can now use it in all perf tools, such as:
perf record -e probe_perf:found -aR sleep 1
#
# perf probe -x ~/bin/perf -L map_groups__find_by_name
<map_groups__find_by_name@/home/acme/git/perf/tools/perf/util/symbol.c:0>
0 struct map *map_groups__find_by_name(struct map_groups *mg, const char *name)
1 {
2 struct maps *maps = &mg->maps;
struct map *map;
5 down_read(&maps->lock);
7 if (mg->last_search_by_name && strcmp(mg->last_search_by_name->dso->short_name, name) == 0) {
8 map = mg->last_search_by_name;
9 goto out_unlock;
}
/*
* If we have mg->maps_by_name, then the name isn't in the rbtree,
* as mg->maps_by_name mirrors the rbtree when lookups by name are
* made.
*/
16 map = __map_groups__find_by_name(mg, name);
17 if (map || mg->maps_by_name != NULL)
18 goto out_unlock;
/* Fallback to traversing the rbtree... */
21 maps__for_each_entry(maps, map)
22 if (strcmp(map->dso->short_name, name) == 0) {
23 mg->last_search_by_name = map;
24 goto out_unlock;
}
27 map = NULL;
out_unlock:
30 up_read(&maps->lock);
31 return map;
32 }
int dso__load_vmlinux(struct dso *dso, struct map *map,
const char *vmlinux, bool vmlinux_allocated)
# perf probe -x ~/bin/perf 'fallback=map_groups__find_by_name:21 name:string'
Added new events:
probe_perf:fallback (on map_groups__find_by_name:21 in /home/acme/bin/perf with name:string)
probe_perf:fallback_1 (on map_groups__find_by_name:21 in /home/acme/bin/perf with name:string)
You can now use it in all perf tools, such as:
perf record -e probe_perf:fallback_1 -aR sleep 1
#
# perf probe -l
probe_perf:fallback (on map_groups__find_by_name:21@util/symbol.c in /home/acme/bin/perf with name_string)
probe_perf:fallback_1 (on map_groups__find_by_name:21@util/symbol.c in /home/acme/bin/perf with name_string)
probe_perf:found (on __map_groups__find_by_name:10@util/symbol.c in /home/acme/bin/perf with name_string)
#
# perf stat -e probe_perf:*
Now run 'perf top' in another term and then, after a while, stop 'perf stat':
Furthermore, if we ask for interval printing, we can see that that is done just
at the start of the workload:
# perf stat -I1000 -e probe_perf:*
# time counts unit events
1.000319513 0 probe_perf:found
1.000319513 0 probe_perf:fallback_1
1.000319513 0 probe_perf:fallback
2.001868092 23,251 probe_perf:found
2.001868092 0 probe_perf:fallback_1
2.001868092 0 probe_perf:fallback
3.002901597 0 probe_perf:found
3.002901597 0 probe_perf:fallback_1
3.002901597 0 probe_perf:fallback
4.003358591 0 probe_perf:found
4.003358591 0 probe_perf:fallback_1
4.003358591 0 probe_perf:fallback
^C
#
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: https://lkml.kernel.org/n/tip-c5lmbyr14x448rcfii7y6t3k@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-17 22:38:13 +08:00
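The qsort/bsearch pattern the commit message above describes can be sketched with plain strings standing in for struct map (hypothetical stand-ins, not perf's actual data structures): sort the name array once, then binary-search it for each lookup.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

/* qsort comparator: both arguments point at a char* element. */
static int cmp_name(const void *a, const void *b)
{
	return strcmp(*(const char * const *)a, *(const char * const *)b);
}

/* bsearch comparator: key is the name itself, elem points at a char*. */
static int cmp_key(const void *key, const void *elem)
{
	return strcmp(key, *(const char * const *)elem);
}

/* Sort by name, then look one name up, mirroring maps_by_name lookups. */
static const char *find_by_name(const char **names, size_t n, const char *name)
{
	const char **p;

	qsort(names, n, sizeof(*names), cmp_name);
	p = bsearch(name, names, n, sizeof(*names), cmp_key);
	return p ? *p : NULL;
}
```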
static int map__strcmp(const void *a, const void *b)
{
	const struct map *ma = *(const struct map **)a, *mb = *(const struct map **)b;

	return strcmp(ma->dso->short_name, mb->dso->short_name);
}

static int map__strcmp_name(const void *name, const void *b)
{
	const struct map *map = *(const struct map **)b;

	return strcmp(name, map->dso->short_name);
}
2019-11-26 09:21:28 +08:00
|
|
|
void __maps__sort_by_name(struct maps *maps)
|
perf map_groups: Auto sort maps by name, if needed
There are still lots of lookups by name, even if just when loading
vmlinux, till that code is studied to figure out if its possible to do
away with those map lookup by names, provide a way to sort it using
libc's qsort/bsearch.
Doing it at the first lookup defers the sorting a bit, and as the code
stands now, is never done for user maps, just for the kernel ones.
# perf probe -l
# perf probe -x ~/bin/perf -L __map_groups__find_by_name
<__map_groups__find_by_name@/home/acme/git/perf/tools/perf/util/symbol.c:0>
0 static struct map *__map_groups__find_by_name(struct map_groups *mg, const char *name)
1 {
struct map **mapp;
4 if (mg->maps_by_name == NULL &&
5 map__groups__sort_by_name_from_rbtree(mg))
6 return NULL;
8 mapp = bsearch(name, mg->maps_by_name, mg->nr_maps, sizeof(*mapp), map__strcmp_name);
9 if (mapp)
10 return *mapp;
11 return NULL;
12 }
struct map *map_groups__find_by_name(struct map_groups *mg, const char *name)
{
# perf probe -x ~/bin/perf 'found=__map_groups__find_by_name:10 name:string'
Added new event:
probe_perf:found (on __map_groups__find_by_name:10 in /home/acme/bin/perf with name:string)
You can now use it in all perf tools, such as:
perf record -e probe_perf:found -aR sleep 1
#
# perf probe -x ~/bin/perf -L map_groups__find_by_name
<map_groups__find_by_name@/home/acme/git/perf/tools/perf/util/symbol.c:0>
0 struct map *map_groups__find_by_name(struct map_groups *mg, const char *name)
1 {
2 struct maps *maps = &mg->maps;
struct map *map;
5 down_read(&maps->lock);
7 if (mg->last_search_by_name && strcmp(mg->last_search_by_name->dso->short_name, name) == 0) {
8 map = mg->last_search_by_name;
9 goto out_unlock;
}
/*
* If we have mg->maps_by_name, then the name isn't in the rbtree,
* as mg->maps_by_name mirrors the rbtree when lookups by name are
* made.
*/
16 map = __map_groups__find_by_name(mg, name);
17 if (map || mg->maps_by_name != NULL)
18 goto out_unlock;
/* Fallback to traversing the rbtree... */
21 maps__for_each_entry(maps, map)
22 if (strcmp(map->dso->short_name, name) == 0) {
23 mg->last_search_by_name = map;
24 goto out_unlock;
}
27 map = NULL;
out_unlock:
30 up_read(&maps->lock);
31 return map;
32 }
int dso__load_vmlinux(struct dso *dso, struct map *map,
const char *vmlinux, bool vmlinux_allocated)
# perf probe -x ~/bin/perf 'fallback=map_groups__find_by_name:21 name:string'
Added new events:
probe_perf:fallback (on map_groups__find_by_name:21 in /home/acme/bin/perf with name:string)
probe_perf:fallback_1 (on map_groups__find_by_name:21 in /home/acme/bin/perf with name:string)
You can now use it in all perf tools, such as:
perf record -e probe_perf:fallback_1 -aR sleep 1
#
# perf probe -l
probe_perf:fallback (on map_groups__find_by_name:21@util/symbol.c in /home/acme/bin/perf with name_string)
probe_perf:fallback_1 (on map_groups__find_by_name:21@util/symbol.c in /home/acme/bin/perf with name_string)
probe_perf:found (on __map_groups__find_by_name:10@util/symbol.c in /home/acme/bin/perf with name_string)
#
# perf stat -e probe_perf:*
Now run 'perf top' in another term and then, after a while, stop 'perf stat':
Furthermore, if we ask for interval printing, we can see that that is done just
at the start of the workload:
# perf stat -I1000 -e probe_perf:*
# time counts unit events
1.000319513 0 probe_perf:found
1.000319513 0 probe_perf:fallback_1
1.000319513 0 probe_perf:fallback
2.001868092 23,251 probe_perf:found
2.001868092 0 probe_perf:fallback_1
2.001868092 0 probe_perf:fallback
3.002901597 0 probe_perf:found
3.002901597 0 probe_perf:fallback_1
3.002901597 0 probe_perf:fallback
4.003358591 0 probe_perf:found
4.003358591 0 probe_perf:fallback_1
4.003358591 0 probe_perf:fallback
^C
#
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: https://lkml.kernel.org/n/tip-c5lmbyr14x448rcfii7y6t3k@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
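The lazy sort-and-bsearch scheme described in the commit above can be sketched as a standalone program. This is only an illustration under assumed types: the `struct map`, the comparators, and `find_by_name()` mirror the perf names but are simplified stand-ins, not the actual perf implementation.

	#include <assert.h>
	#include <stdlib.h>
	#include <string.h>

	struct map { const char *short_name; };

	static struct map vmlinux = { "vmlinux" }, kcore = { "kcore" }, module = { "nf_tables" };
	static struct map *maps_by_name[] = { &vmlinux, &kcore, &module };
	#define NR_MAPS (sizeof(maps_by_name) / sizeof(maps_by_name[0]))
	static int sorted;

	/* qsort comparator: both arguments point at array slots (struct map **). */
	static int map__strcmp(const void *a, const void *b)
	{
		const struct map *const *ma = a, *const *mb = b;
		return strcmp((*ma)->short_name, (*mb)->short_name);
	}

	/* bsearch comparator: key is the name, element is an array slot. */
	static int map__strcmp_name(const void *name, const void *m)
	{
		const struct map *const *mapp = m;
		return strcmp(name, (*mapp)->short_name);
	}

	static struct map *find_by_name(const char *name)
	{
		struct map **mapp;

		if (!sorted) {	/* sort lazily, on the first lookup by name */
			qsort(maps_by_name, NR_MAPS, sizeof(struct map *), map__strcmp);
			sorted = 1;
		}
		mapp = bsearch(name, maps_by_name, NR_MAPS, sizeof(*mapp), map__strcmp_name);
		return mapp ? *mapp : NULL;
	}

	int main(void)
	{
		assert(find_by_name("vmlinux") == &vmlinux);
		assert(find_by_name("nf_tables") == &module);
		assert(find_by_name("missing") == NULL);
		return 0;
	}

Sorting is deferred until the first lookup, exactly as the commit describes; every later lookup is then O(log n) instead of a linear walk.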
{
	qsort(maps->maps_by_name, maps->nr_maps, sizeof(struct map *), map__strcmp);
}

static int map__groups__sort_by_name_from_rbtree(struct maps *maps)
{
	struct map *map;
	struct map **maps_by_name = realloc(maps->maps_by_name, maps->nr_maps * sizeof(map));
	int i = 0;

	if (maps_by_name == NULL)
		return -1;

	up_read(&maps->lock);
	down_write(&maps->lock);

	maps->maps_by_name = maps_by_name;
	maps->nr_maps_allocated = maps->nr_maps;
	maps__for_each_entry(maps, map)
		maps_by_name[i++] = map;

	__maps__sort_by_name(maps);

	up_write(&maps->lock);
	down_read(&maps->lock);
	return 0;
}

static struct map *__maps__find_by_name(struct maps *maps, const char *name)
{
	struct map **mapp;

	if (maps->maps_by_name == NULL &&
	    map__groups__sort_by_name_from_rbtree(maps))
perf map_groups: Auto sort maps by name, if needed
There are still lots of lookups by name, even if just when loading
vmlinux, till that code is studied to figure out if its possible to do
away with those map lookup by names, provide a way to sort it using
libc's qsort/bsearch.
Doing it at the first lookup defers the sorting a bit, and as the code
stands now, is never done for user maps, just for the kernel ones.
# perf probe -l
# perf probe -x ~/bin/perf -L __map_groups__find_by_name
<__map_groups__find_by_name@/home/acme/git/perf/tools/perf/util/symbol.c:0>
0 static struct map *__map_groups__find_by_name(struct map_groups *mg, const char *name)
1 {
struct map **mapp;
4 if (mg->maps_by_name == NULL &&
5 map__groups__sort_by_name_from_rbtree(mg))
6 return NULL;
8 mapp = bsearch(name, mg->maps_by_name, mg->nr_maps, sizeof(*mapp), map__strcmp_name);
9 if (mapp)
10 return *mapp;
11 return NULL;
12 }
struct map *map_groups__find_by_name(struct map_groups *mg, const char *name)
{
# perf probe -x ~/bin/perf 'found=__map_groups__find_by_name:10 name:string'
Added new event:
probe_perf:found (on __map_groups__find_by_name:10 in /home/acme/bin/perf with name:string)
You can now use it in all perf tools, such as:
perf record -e probe_perf:found -aR sleep 1
#
# perf probe -x ~/bin/perf -L map_groups__find_by_name
<map_groups__find_by_name@/home/acme/git/perf/tools/perf/util/symbol.c:0>
0 struct map *map_groups__find_by_name(struct map_groups *mg, const char *name)
1 {
2 struct maps *maps = &mg->maps;
struct map *map;
5 down_read(&maps->lock);
7 if (mg->last_search_by_name && strcmp(mg->last_search_by_name->dso->short_name, name) == 0) {
8 map = mg->last_search_by_name;
9 goto out_unlock;
}
/*
* If we have mg->maps_by_name, then the name isn't in the rbtree,
* as mg->maps_by_name mirrors the rbtree when lookups by name are
* made.
*/
16 map = __map_groups__find_by_name(mg, name);
17 if (map || mg->maps_by_name != NULL)
18 goto out_unlock;
/* Fallback to traversing the rbtree... */
21 maps__for_each_entry(maps, map)
22 if (strcmp(map->dso->short_name, name) == 0) {
23 mg->last_search_by_name = map;
24 goto out_unlock;
}
27 map = NULL;
out_unlock:
30 up_read(&maps->lock);
31 return map;
32 }
int dso__load_vmlinux(struct dso *dso, struct map *map,
const char *vmlinux, bool vmlinux_allocated)
# perf probe -x ~/bin/perf 'fallback=map_groups__find_by_name:21 name:string'
Added new events:
probe_perf:fallback (on map_groups__find_by_name:21 in /home/acme/bin/perf with name:string)
probe_perf:fallback_1 (on map_groups__find_by_name:21 in /home/acme/bin/perf with name:string)
You can now use it in all perf tools, such as:
perf record -e probe_perf:fallback_1 -aR sleep 1
#
# perf probe -l
probe_perf:fallback (on map_groups__find_by_name:21@util/symbol.c in /home/acme/bin/perf with name_string)
probe_perf:fallback_1 (on map_groups__find_by_name:21@util/symbol.c in /home/acme/bin/perf with name_string)
probe_perf:found (on __map_groups__find_by_name:10@util/symbol.c in /home/acme/bin/perf with name_string)
#
# perf stat -e probe_perf:*
Now run 'perf top' in another term and then, after a while, stop 'perf stat':
Furthermore, if we ask for interval printing, we can see that the lookups
happen just at the start of the workload:
# perf stat -I1000 -e probe_perf:*
# time counts unit events
1.000319513 0 probe_perf:found
1.000319513 0 probe_perf:fallback_1
1.000319513 0 probe_perf:fallback
2.001868092 23,251 probe_perf:found
2.001868092 0 probe_perf:fallback_1
2.001868092 0 probe_perf:fallback
3.002901597 0 probe_perf:found
3.002901597 0 probe_perf:fallback_1
3.002901597 0 probe_perf:fallback
4.003358591 0 probe_perf:found
4.003358591 0 probe_perf:fallback_1
4.003358591 0 probe_perf:fallback
^C
#
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: https://lkml.kernel.org/n/tip-c5lmbyr14x448rcfii7y6t3k@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-17 22:38:13 +08:00
		return NULL;

	mapp = bsearch(name, maps->maps_by_name, maps->nr_maps, sizeof(*mapp), map__strcmp_name);
	if (mapp)
		return *mapp;
	return NULL;
}

struct map *maps__find_by_name(struct maps *maps, const char *name)
{
	struct map *map;

	down_read(&maps->lock);

	if (maps->last_search_by_name && strcmp(maps->last_search_by_name->dso->short_name, name) == 0) {
		map = maps->last_search_by_name;
perf map_groups: Add a front end cache for map lookups by name
Let's see if it helps:
First look at the probeable lines for the function that does lookups by
name in a map_groups struct:
# perf probe -x ~/bin/perf -L map_groups__find_by_name
<map_groups__find_by_name@/home/acme/git/perf/tools/perf/util/symbol.c:0>
0 struct map *map_groups__find_by_name(struct map_groups *mg, const char *name)
1 {
2 struct maps *maps = &mg->maps;
struct map *map;
5 down_read(&maps->lock);
7 if (mg->last_search_by_name && strcmp(mg->last_search_by_name->dso->short_name, name) == 0) {
8 map = mg->last_search_by_name;
9 goto out_unlock;
}
12 maps__for_each_entry(maps, map)
13 if (strcmp(map->dso->short_name, name) == 0) {
14 mg->last_search_by_name = map;
15 goto out_unlock;
}
18 map = NULL;
out_unlock:
21 up_read(&maps->lock);
22 return map;
23 }
int dso__load_vmlinux(struct dso *dso, struct map *map,
const char *vmlinux, bool vmlinux_allocated)
#
Now add a probe to the place where we reuse the last search:
# perf probe -x ~/bin/perf map_groups__find_by_name:8
Added new event:
probe_perf:map_groups__find_by_name (on map_groups__find_by_name:8 in /home/acme/bin/perf)
You can now use it in all perf tools, such as:
perf record -e probe_perf:map_groups__find_by_name -aR sleep 1
#
Now let's do a system-wide 'perf stat' counting those events:
# perf stat -e probe_perf:*
Leave it running and let's do a 'perf top', then, after a while, stop the
'perf stat':
# perf stat -e probe_perf:*
^C
Performance counter stats for 'system wide':
3,603 probe_perf:map_groups__find_by_name
44.565253139 seconds time elapsed
#
yeah, good to have.
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: https://lkml.kernel.org/n/tip-tcz37g3nxv3tvxw3q90vga3p@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-14 03:33:33 +08:00
		goto out_unlock;
	}
	/*
	 * If we have maps->maps_by_name, then the name isn't in the rbtree,
	 * as maps->maps_by_name mirrors the rbtree when lookups by name are
	 * made.
	 */
	map = __maps__find_by_name(maps, name);
	if (map || maps->maps_by_name != NULL)
		goto out_unlock;
	/* Fallback to traversing the rbtree... */
	maps__for_each_entry(maps, map)
		if (strcmp(map->dso->short_name, name) == 0) {
			maps->last_search_by_name = map;
			goto out_unlock;
		}

	map = NULL;
out_unlock:
	up_read(&maps->lock);
	return map;
}

int dso__load_vmlinux(struct dso *dso, struct map *map,
		      const char *vmlinux, bool vmlinux_allocated)
{
	int err = -1;
	struct symsrc ss;
	char symfs_vmlinux[PATH_MAX];
	enum dso_binary_type symtab_type;

	if (vmlinux[0] == '/')
		snprintf(symfs_vmlinux, sizeof(symfs_vmlinux), "%s", vmlinux);
	else
		symbol__join_symfs(symfs_vmlinux, vmlinux);

	if (dso->kernel == DSO_SPACE__KERNEL_GUEST)
		symtab_type = DSO_BINARY_TYPE__GUEST_VMLINUX;
	else
		symtab_type = DSO_BINARY_TYPE__VMLINUX;

	if (symsrc__init(&ss, dso, symfs_vmlinux, symtab_type))
		return -1;

	err = dso__load_sym(dso, map, &ss, &ss, 0);
	symsrc__destroy(&ss);

	if (err > 0) {
		if (dso->kernel == DSO_SPACE__KERNEL_GUEST)
			dso->binary_type = DSO_BINARY_TYPE__GUEST_VMLINUX;
		else
			dso->binary_type = DSO_BINARY_TYPE__VMLINUX;
		dso__set_long_name(dso, vmlinux, vmlinux_allocated);
		dso__set_loaded(dso);
		pr_debug("Using %s for symbols\n", symfs_vmlinux);
	}

	return err;
}
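The absolute-vs-relative branching at the top of dso__load_vmlinux can be isolated as a small helper. This is a simplified sketch: `join_symfs` and `resolve_vmlinux` are illustrative stand-ins for perf's `symbol__join_symfs` and the inline branch, not the real implementations.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Simplified stand-in: prefix 'path' with the symfs root unless empty. */
static void join_symfs(char *out, size_t sz, const char *symfs,
		       const char *path)
{
	if (symfs && symfs[0] != '\0')
		snprintf(out, sz, "%s/%s", symfs, path);
	else
		snprintf(out, sz, "%s", path);
}

/* Mirror of the branch in dso__load_vmlinux: absolute paths bypass symfs. */
static void resolve_vmlinux(char *out, size_t sz, const char *symfs,
			    const char *vmlinux)
{
	if (vmlinux[0] == '/')
		snprintf(out, sz, "%s", vmlinux);
	else
		join_symfs(out, sz, symfs, vmlinux);
}
```

An absolute vmlinux path is used verbatim; a relative one is resolved under the symfs root (if any), which is what lets `--symfs` point at an offline sysroot.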

int dso__load_vmlinux_path(struct dso *dso, struct map *map)
{
	int i, err = 0;
	char *filename = NULL;
perf symbols: Try the .debug/ DSO cache as a last resort
Not as the first attempt at finding a vmlinux for the running kernel,
this way we get a more informative filename to present in tools, it will
check that the build-id is the same as the one previously loaded in the
DSO in dso->build_id, reading from /sys/kernel/notes, for instance.
E.g. in the annotation TUI, going from 'perf top', for the scsi_sg_alloc
kernel function, in the first line:
Before:
scsi_sg_alloc /root/.debug/.build-id/28/2777c262e6b3c0451375163c9a81c893218ab1
After:
scsi_sg_alloc /lib/modules/4.3.0-rc1+/build/vmlinux
And:
# ls -la /root/.debug/.build-id/28/2777c262e6b3c0451375163c9a81c893218ab1
lrwxrwxrwx. 1 root root 81 Sep 22 16:11 /root/.debug/.build-id/28/2777c262e6b3c0451375163c9a81c893218ab1 -> ../../home/git/build/v4.3.0-rc1+/vmlinux/282777c262e6b3c0451375163c9a81c893218ab1
# file ~/.debug/home/git/build/v4.3.0-rc1+/vmlinux/282777c262e6b3c0451375163c9a81c893218ab1
/root/.debug/home/git/build/v4.3.0-rc1+/vmlinux/282777c262e6b3c0451375163c9a81c893218ab1: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, BuildID[sha1]=282777c262e6b3c0451375163c9a81c893218ab1, not stripped
#
The same as:
# file /lib/modules/4.3.0-rc1+/build/vmlinux
/lib/modules/4.3.0-rc1+/build/vmlinux: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, BuildID[sha1]=282777c262e6b3c0451375163c9a81c893218ab1, not stripped
Furthermore:
# sha256sum /lib/modules/4.3.0-rc1+/build/vmlinux
e7a789bbdc61029ec09140c228e1dd651271f38ef0b8416c0b7d5ff727b98be2 /lib/modules/4.3.0-rc1+/build/vmlinux
# sha256sum ~/.debug/home/git/build/v4.3.0-rc1+/vmlinux/282777c262e6b3c0451375163c9a81c893218ab1
e7a789bbdc61029ec09140c228e1dd651271f38ef0b8416c0b7d5ff727b98be2 /root/.debug/home/git/build/v4.3.0-rc1+/vmlinux/282777c262e6b3c0451375163c9a81c893218ab1
[root@zoo new]#
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-9y42ikzq3jisiddoi6f07n8z@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2015-10-12 04:17:24 +08:00
	pr_debug("Looking at the vmlinux_path (%d entries long)\n",
		 vmlinux_path__nr_entries + 1);

	for (i = 0; i < vmlinux_path__nr_entries; ++i) {
		err = dso__load_vmlinux(dso, map, vmlinux_path[i], false);
		if (err > 0)
			goto out;
	}

	if (!symbol_conf.ignore_vmlinux_buildid)
		filename = dso__build_id_filename(dso, NULL, 0, false);
	if (filename != NULL) {
		err = dso__load_vmlinux(dso, map, filename, true);
		if (err > 0)
			goto out;
		free(filename);
	}

out:
	return err;
}

static bool visible_dir_filter(const char *name, struct dirent *d)
{
	if (d->d_type != DT_DIR)
		return false;
	return lsdir_no_dot_filter(name, d);
}

static int find_matching_kcore(struct map *map, char *dir, size_t dir_sz)
{
	char kallsyms_filename[PATH_MAX];
	int ret = -1;
	struct strlist *dirs;
	struct str_node *nd;

	dirs = lsdir(dir, visible_dir_filter);
	if (!dirs)
		return -1;

	strlist__for_each_entry(nd, dirs) {
		scnprintf(kallsyms_filename, sizeof(kallsyms_filename),
			  "%s/%s/kallsyms", dir, nd->s);
		if (!validate_kcore_addresses(kallsyms_filename, map)) {
			strlcpy(dir, kallsyms_filename, dir_sz);
			ret = 0;
			break;
		}
	}

	strlist__delete(dirs);

	return ret;
}

/*
 * Use open(O_RDONLY) to check readability directly instead of access(R_OK)
 * since access(R_OK) only checks with real UID/GID but open() uses effective
 * UID/GID and actual capabilities (e.g. /proc/kcore requires CAP_SYS_RAWIO).
 */
static bool filename__readable(const char *file)
{
	int fd = open(file, O_RDONLY);

	if (fd < 0)
		return false;
	close(fd);
	return true;
}
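The point of filename__readable above is that a real open() honors effective IDs and capabilities where access(R_OK) checks only the real UID/GID. A standalone version of the same probe (`file_readable` is an illustrative name):

```c
#include <assert.h>
#include <fcntl.h>
#include <stdbool.h>
#include <unistd.h>

/* Probe readability with a real open(): access(R_OK) checks only the
 * real UID/GID and cannot see capability-gated files like /proc/kcore. */
static bool file_readable(const char *path)
{
	int fd = open(path, O_RDONLY);

	if (fd < 0)
		return false;
	close(fd);
	return true;
}
```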

static char *dso__find_kallsyms(struct dso *dso, struct map *map)
{
	struct build_id bid;
	char sbuild_id[SBUILD_ID_SIZE];
	bool is_host = false;
	char path[PATH_MAX];

	if (!dso->has_build_id) {
		/*
		 * Last resort, if we don't have a build-id and couldn't find
		 * any vmlinux file, try the running kernel kallsyms table.
		 */
		goto proc_kallsyms;
	}

	if (sysfs__read_build_id("/sys/kernel/notes", &bid) == 0)
		is_host = dso__build_id_equal(dso, &bid);

	/* Try a fast path for /proc/kallsyms if possible */
	if (is_host) {
		/*
		 * Do not check the build-id cache, unless we know we cannot use
		 * /proc/kcore or module maps don't match to /proc/kallsyms.
		 * To check readability of /proc/kcore, do not use access(R_OK)
		 * since /proc/kcore requires CAP_SYS_RAWIO to read and access
		 * can't check it.
		 */
		if (filename__readable("/proc/kcore") &&
		    !validate_kcore_addresses("/proc/kallsyms", map))
			goto proc_kallsyms;
	}

	build_id__sprintf(&dso->bid, sbuild_id);

	/* Find kallsyms in build-id cache with kcore */
	scnprintf(path, sizeof(path), "%s/%s/%s",
		  buildid_dir, DSO__NAME_KCORE, sbuild_id);

	if (!find_matching_kcore(map, path, sizeof(path)))
		return strdup(path);

	/* Use current /proc/kallsyms if possible */
	if (is_host) {
proc_kallsyms:
		return strdup("/proc/kallsyms");
	}

	/* Finally, find a cache of kallsyms */
	if (!build_id_cache__kallsyms_path(sbuild_id, path, sizeof(path))) {
		pr_err("No kallsyms or vmlinux with build-id %s was found\n",
		       sbuild_id);
		return NULL;
	}

	return strdup(path);
}
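The build-id cache layout seen in the earlier commit message (`~/.debug/.build-id/28/2777c2...`: the first hex byte becomes a subdirectory, the remainder the file name) can be sketched as below. `buildid_cache_path` is an illustrative helper, not perf's API.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Build "<root>/.build-id/<first 2 hex chars>/<rest>" from an sbuild_id
 * string, mirroring the ~/.debug cache layout (e.g. .../28/2777c2...). */
static int buildid_cache_path(char *out, size_t sz, const char *root,
			      const char *sbuild_id)
{
	if (strlen(sbuild_id) < 3)
		return -1;	/* need at least one byte plus remainder */
	snprintf(out, sz, "%s/.build-id/%.2s/%s",
		 root, sbuild_id, sbuild_id + 2);
	return 0;
}
```

Splitting on the first byte keeps any one directory from accumulating every cached object, the same trick git uses for loose objects.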

static int dso__load_kernel_sym(struct dso *dso, struct map *map)
{
	int err;
	const char *kallsyms_filename = NULL;
	char *kallsyms_allocated_filename = NULL;
	char *filename = NULL;

	/*
	 * Step 1: if the user specified a kallsyms or vmlinux filename, use
	 * it and only it, reporting errors to the user if it cannot be used.
	 *
	 * For instance, try to analyse an ARM perf.data file _without_ a
	 * build-id, or if the user specifies the wrong path to the right
	 * vmlinux file, obviously we can't fall back to another vmlinux (an
	 * x86_64 one, on the machine where analysis is being performed, say),
	 * or worse, /proc/kallsyms.
	 *
	 * If the specified file _has_ a build-id and there is a build-id
	 * section in the perf.data file, we will still do the expected
	 * validation in dso__load_vmlinux and will bail out if they don't
	 * match.
	 */
	if (symbol_conf.kallsyms_name != NULL) {
		kallsyms_filename = symbol_conf.kallsyms_name;
		goto do_kallsyms;
	}

	if (!symbol_conf.ignore_vmlinux && symbol_conf.vmlinux_name != NULL) {
		return dso__load_vmlinux(dso, map, symbol_conf.vmlinux_name, false);
	}

	/*
	 * Before checking on common vmlinux locations, check if it's
	 * stored as standard build id binary (not kallsyms) under
	 * .debug cache.
	 */
	if (!symbol_conf.ignore_vmlinux_buildid)
		filename = __dso__build_id_filename(dso, NULL, 0, false, false);
	if (filename != NULL) {
		err = dso__load_vmlinux(dso, map, filename, true);
		if (err > 0)
			return err;
		free(filename);
	}

	if (!symbol_conf.ignore_vmlinux && vmlinux_path != NULL) {
		err = dso__load_vmlinux_path(dso, map);
		if (err > 0)
			return err;
	}

	/* do not try local files if a symfs was given */
	if (symbol_conf.symfs[0] != 0)
		return -1;

	kallsyms_allocated_filename = dso__find_kallsyms(dso, map);
	if (!kallsyms_allocated_filename)
		return -1;

	kallsyms_filename = kallsyms_allocated_filename;

do_kallsyms:
	err = dso__load_kallsyms(dso, kallsyms_filename, map);
	if (err > 0)
		pr_debug("Using %s for symbols\n", kallsyms_filename);
	free(kallsyms_allocated_filename);

	if (err > 0 && !dso__is_kcore(dso)) {
		dso->binary_type = DSO_BINARY_TYPE__KALLSYMS;
		dso__set_long_name(dso, DSO__NAME_KALLSYMS, false);
		map__fixup_start(map);
		map__fixup_end(map);
	}

	return err;
}

static int dso__load_guest_kernel_sym(struct dso *dso, struct map *map)
{
	int err;
	const char *kallsyms_filename;
	struct machine *machine = map__kmaps(map)->machine;
	char path[PATH_MAX];

	if (machine->kallsyms_filename) {
		kallsyms_filename = machine->kallsyms_filename;
	} else if (machine__is_default_guest(machine)) {
		/*
		 * if the user specified a vmlinux filename, use it and only
		 * it, reporting errors to the user if it cannot be used.
		 * Or use file guest_kallsyms inputted by user on commandline
		 */
		if (symbol_conf.default_guest_vmlinux_name != NULL) {
			err = dso__load_vmlinux(dso, map,
						symbol_conf.default_guest_vmlinux_name,
						false);
			return err;
		}

		kallsyms_filename = symbol_conf.default_guest_kallsyms;
		if (!kallsyms_filename)
			return -1;
	} else {
		sprintf(path, "%s/proc/kallsyms", machine->root_dir);
		kallsyms_filename = path;
	}

	err = dso__load_kallsyms(dso, kallsyms_filename, map);
	if (err > 0)
		pr_debug("Using %s for symbols\n", kallsyms_filename);
	if (err > 0 && !dso__is_kcore(dso)) {
		dso->binary_type = DSO_BINARY_TYPE__GUEST_KALLSYMS;
		dso__set_long_name(dso, machine->mmap_name, false);
		map__fixup_start(map);
		map__fixup_end(map);
	}

	return err;
}

static void vmlinux_path__exit(void)
{
	while (--vmlinux_path__nr_entries >= 0)
		zfree(&vmlinux_path[vmlinux_path__nr_entries]);
	vmlinux_path__nr_entries = 0;

	zfree(&vmlinux_path);
}

static const char * const vmlinux_paths[] = {
	"vmlinux",
	"/boot/vmlinux"
};

static const char * const vmlinux_paths_upd[] = {
	"/boot/vmlinux-%s",
	"/usr/lib/debug/boot/vmlinux-%s",
	"/lib/modules/%s/build/vmlinux",
	"/usr/lib/debug/lib/modules/%s/vmlinux",
	"/usr/lib/debug/boot/vmlinux-%s.debug"
};

static int vmlinux_path__add(const char *new_entry)
{
	vmlinux_path[vmlinux_path__nr_entries] = strdup(new_entry);
	if (vmlinux_path[vmlinux_path__nr_entries] == NULL)
		return -1;
	++vmlinux_path__nr_entries;

	return 0;
}

static int vmlinux_path__init(struct perf_env *env)
{
	struct utsname uts;
	char bf[PATH_MAX];
	char *kernel_version;
	unsigned int i;

	vmlinux_path = malloc(sizeof(char *) * (ARRAY_SIZE(vmlinux_paths) +
			      ARRAY_SIZE(vmlinux_paths_upd)));
	if (vmlinux_path == NULL)
		return -1;

	for (i = 0; i < ARRAY_SIZE(vmlinux_paths); i++)
		if (vmlinux_path__add(vmlinux_paths[i]) < 0)
			goto out_fail;

	/* only try kernel version if no symfs was given */
	if (symbol_conf.symfs[0] != 0)
		return 0;

	if (env) {
		kernel_version = env->os_release;
	} else {
		if (uname(&uts) < 0)
			goto out_fail;

		kernel_version = uts.release;
	}

	for (i = 0; i < ARRAY_SIZE(vmlinux_paths_upd); i++) {
		snprintf(bf, sizeof(bf), vmlinux_paths_upd[i], kernel_version);
		if (vmlinux_path__add(bf) < 0)
			goto out_fail;
	}

	return 0;

out_fail:
	vmlinux_path__exit();
	return -1;
}
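vmlinux_path__init expands each `%s` template in vmlinux_paths_upd with the kernel release (from the perf.data env or uname). The expansion step alone looks like this; `expand_vmlinux_template` is an illustrative helper name.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* A couple of the versioned search-path templates from the source. */
static const char * const upd_templates[] = {
	"/boot/vmlinux-%s",
	"/lib/modules/%s/build/vmlinux",
};

/* Expand one template with the kernel release string, as the loop at the
 * end of vmlinux_path__init does with snprintf(). */
static void expand_vmlinux_template(char *out, size_t sz,
				    const char *tmpl, const char *release)
{
	snprintf(out, sz, tmpl, release);
}
```

With release "4.3.0-rc1+" the second template yields the `/lib/modules/4.3.0-rc1+/build/vmlinux` path seen in the commit message above.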

int setup_list(struct strlist **list, const char *list_str,
	       const char *list_name)
{
	if (list_str == NULL)
		return 0;

	*list = strlist__new(list_str, NULL);
	if (!*list) {
		pr_err("problems parsing %s list\n", list_name);
		return -1;
	}
perf symbols: Store if there is a filter in place
When setting yup the symbols library we setup several filter lists,
for dsos, comms, symbols, etc, and there is code that, if there are
filters, do certain operations, like recalculate the number of non
filtered histogram entries in the top/report TUI.
But they were considering just the "Zoom" filters, when they need to
take into account as well the above mentioned filters (perf top --comms,
--dsos, etc).
So store in symbol_conf.has_filter true if any of those filters is in
place.
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-f5edfmhq69vfvs1kmikq1wep@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2015-07-13 19:21:57 +08:00
	symbol_conf.has_filter = true;
	return 0;
}

int setup_intlist(struct intlist **list, const char *list_str,
		  const char *list_name)
{
	if (list_str == NULL)
		return 0;

	*list = intlist__new(list_str);
	if (!*list) {
		pr_err("problems parsing %s list\n", list_name);
		return -1;
	}
	return 0;
}

static int setup_addrlist(struct intlist **addr_list, struct strlist *sym_list)
{
	struct str_node *pos, *tmp;
	unsigned long val;
	char *sep;
	const char *end;
	int i = 0, err;

	*addr_list = intlist__new(NULL);
	if (!*addr_list)
		return -1;

	strlist__for_each_entry_safe(pos, tmp, sym_list) {
		errno = 0;
		val = strtoul(pos->s, &sep, 16);
		if (errno || (sep == pos->s))
			continue;

		if (*sep != '\0') {
			end = pos->s + strlen(pos->s) - 1;
			while (end >= sep && isspace(*end))
				end--;

			if (end >= sep)
				continue;
		}

		err = intlist__add(*addr_list, val);
		if (err)
			break;

		strlist__remove(sym_list, pos);
		i++;
	}

	if (i == 0) {
		intlist__delete(*addr_list);
		*addr_list = NULL;
	}

	return 0;
}
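The accept/reject rule setup_addrlist applies to each token (a hex number, optionally followed by nothing but whitespace) can be isolated as a predicate. `parse_hex_addr` is an illustrative name for the same strtoul/isspace logic.

```c
#include <assert.h>
#include <ctype.h>
#include <errno.h>
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

/* Accept 's' only if it is a hex number followed by nothing but
 * whitespace, storing the parsed value in *val -- the test that lets
 * setup_addrlist tell raw addresses apart from symbol names. */
static bool parse_hex_addr(const char *s, unsigned long *val)
{
	char *sep;
	const char *end;

	errno = 0;
	*val = strtoul(s, &sep, 16);
	if (errno || sep == s)
		return false;

	if (*sep != '\0') {
		end = s + strlen(s) - 1;
		while (end >= sep && isspace((unsigned char)*end))
			end--;
		if (end >= sep)		/* non-whitespace trailing chars */
			return false;
	}
	return true;
}
```

A name like "page_fault" fails because strtoul consumes nothing; "1234x" fails on the trailing 'x'; "1234  " passes because only whitespace follows the digits.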
perf symbols: Handle /proc/sys/kernel/kptr_restrict
Perf uses /proc/modules to figure out where kernel modules are loaded.
With the advent of kptr_restrict, non root users get zeroes for all module
start addresses.
So check if kptr_restrict is non zero and don't generate the syntethic
PERF_RECORD_MMAP events for them.
Warn the user about it in perf record and in perf report.
In perf report the reference relocation symbol being zero means that
kptr_restrict was set, thus /proc/kallsyms has only zeroed addresses, so don't
use it to fixup symbol addresses when using a valid kallsyms (in the buildid
cache) or vmlinux (in the vmlinux path) build-id located automatically or
specified by the user.
Provide an explanation about it in 'perf report' if kernel samples were taken,
checking if a suitable vmlinux or kallsyms was found/specified.
Restricted /proc/kallsyms don't go to the buildid cache anymore.
Example:
[acme@emilia ~]$ perf record -F 100000 sleep 1
WARNING: Kernel address maps (/proc/{kallsyms,modules}) are restricted, check
/proc/sys/kernel/kptr_restrict.
Samples in kernel functions may not be resolved if a suitable vmlinux file is
not found in the buildid cache or in the vmlinux path.
Samples in kernel modules won't be resolved at all.
If some relocation was applied (e.g. kexec) symbols may be misresolved even
with a suitable vmlinux or kallsyms file.
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.005 MB perf.data (~231 samples) ]
[acme@emilia ~]$
[acme@emilia ~]$ perf report --stdio
Kernel address maps (/proc/{kallsyms,modules}) were restricted,
check /proc/sys/kernel/kptr_restrict before running 'perf record'.
If some relocation was applied (e.g. kexec) symbols may be misresolved.
Samples in kernel modules can't be resolved as well.
# Events: 13 cycles
#
# Overhead Command Shared Object Symbol
# ........ ....... ................. .....................
#
20.24% sleep [kernel.kallsyms] [k] page_fault
20.04% sleep [kernel.kallsyms] [k] filemap_fault
19.78% sleep [kernel.kallsyms] [k] __lru_cache_add
19.69% sleep ld-2.12.so [.] memcpy
14.71% sleep [kernel.kallsyms] [k] dput
4.70% sleep [kernel.kallsyms] [k] flush_signal_handlers
0.73% sleep [kernel.kallsyms] [k] perf_event_comm
0.11% sleep [kernel.kallsyms] [k] native_write_msr_safe
#
# (For a higher level overview, try: perf report --sort comm,dso)
#
[acme@emilia ~]$
This is because it found a suitable vmlinux (build-id checked) in
/lib/modules/2.6.39-rc7+/build/vmlinux (use -v in perf report to see the long
file name).
If we remove that file from the vmlinux path:
[root@emilia ~]# mv /lib/modules/2.6.39-rc7+/build/vmlinux \
/lib/modules/2.6.39-rc7+/build/vmlinux.OFF
[acme@emilia ~]$ perf report --stdio
[kernel.kallsyms] with build id 57298cdbe0131f6871667ec0eaab4804dcf6f562
not found, continuing without symbols
Kernel address maps (/proc/{kallsyms,modules}) were restricted, check
/proc/sys/kernel/kptr_restrict before running 'perf record'.
As no suitable kallsyms nor vmlinux was found, kernel samples can't be
resolved.
Samples in kernel modules can't be resolved as well.
# Events: 13 cycles
#
# Overhead Command Shared Object Symbol
# ........ ....... ................. ......
#
80.31% sleep [kernel.kallsyms] [k] 0xffffffff8103425a
19.69% sleep ld-2.12.so [.] memcpy
#
# (For a higher level overview, try: perf report --sort comm,dso)
#
[acme@emilia ~]$
Reported-by: Stephane Eranian <eranian@google.com>
Suggested-by: David Miller <davem@davemloft.net>
Cc: Dave Jones <davej@redhat.com>
Cc: David Miller <davem@davemloft.net>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Kees Cook <kees.cook@canonical.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Tom Zanussi <tzanussi@gmail.com>
Link: http://lkml.kernel.org/n/tip-mt512joaxxbhhp1odop04yit@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2011-05-26 20:53:51 +08:00
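symbol__read_kptr_restrict below boils down to reading one small integer from the procfs file. A sketch with the parsing factored out of the file read so the policy is testable; `kptr_restricted` and `read_kptr_restrict` are illustrative names, not perf's API.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Interpret one line of /proc/sys/kernel/kptr_restrict: any non-zero
 * value means kernel pointers are hidden from this process. */
static bool kptr_restricted(const char *line)
{
	return atoi(line) != 0;
}

/* Read the first line of 'path' and apply the policy; an unreadable
 * file yields false, matching the default in the perf code. */
static bool read_kptr_restrict(const char *path)
{
	char line[8];
	bool value = false;
	FILE *fp = fopen(path, "r");

	if (fp != NULL) {
		if (fgets(line, sizeof(line), fp) != NULL)
			value = kptr_restricted(line);
		fclose(fp);
	}
	return value;
}
```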
static bool symbol__read_kptr_restrict(void)
{
	bool value = false;
	FILE *fp = fopen("/proc/sys/kernel/kptr_restrict", "r");

	if (fp != NULL) {
		char line[8];
#
# (For a higher level overview, try: perf report --sort comm,dso)
#
[acme@emilia ~]$
This is because it found a suitable vmlinux (build-id checked) in
/lib/modules/2.6.39-rc7+/build/vmlinux (use -v in perf report to see the long
file name).
If we remove that file from the vmlinux path:
[root@emilia ~]# mv /lib/modules/2.6.39-rc7+/build/vmlinux \
/lib/modules/2.6.39-rc7+/build/vmlinux.OFF
[acme@emilia ~]$ perf report --stdio
[kernel.kallsyms] with build id 57298cdbe0131f6871667ec0eaab4804dcf6f562
not found, continuing without symbols
Kernel address maps (/proc/{kallsyms,modules}) were restricted, check
/proc/sys/kernel/kptr_restrict before running 'perf record'.
As no suitable kallsyms nor vmlinux was found, kernel samples can't be
resolved.
Samples in kernel modules can't be resolved as well.
# Events: 13 cycles
#
# Overhead Command Shared Object Symbol
# ........ ....... ................. ......
#
80.31% sleep [kernel.kallsyms] [k] 0xffffffff8103425a
19.69% sleep ld-2.12.so [.] memcpy
#
# (For a higher level overview, try: perf report --sort comm,dso)
#
[acme@emilia ~]$
Reported-by: Stephane Eranian <eranian@google.com>
Suggested-by: David Miller <davem@davemloft.net>
Cc: Dave Jones <davej@redhat.com>
Cc: David Miller <davem@davemloft.net>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Kees Cook <kees.cook@canonical.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Tom Zanussi <tzanussi@gmail.com>
Link: http://lkml.kernel.org/n/tip-mt512joaxxbhhp1odop04yit@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2011-05-26 20:53:51 +08:00
|
|
|
|
2016-05-24 17:21:27 +08:00
|
|
|
if (fgets(line, sizeof(line), fp) != NULL)
|
2019-08-27 09:39:15 +08:00
|
|
|
value = perf_cap__capable(CAP_SYSLOG) ?
|
|
|
|
(atoi(line) >= 2) :
|
|
|
|
(atoi(line) != 0);

		fclose(fp);
	}

	/* Per kernel/kallsyms.c:
	 * we also restrict when perf_event_paranoid > 1 w/o CAP_SYSLOG
	 */
	if (perf_event_paranoid() > 1 && !perf_cap__capable(CAP_SYSLOG))
		value = true;

	return value;
}

int symbol__annotation_init(void)
{
	if (symbol_conf.init_annotation)
		return 0;

	if (symbol_conf.initialized) {
		pr_err("Annotation needs to be init before symbol__init()\n");
		return -1;
	}

	symbol_conf.priv_size += sizeof(struct annotation);
	symbol_conf.init_annotation = true;
	return 0;
}

int symbol__init(struct perf_env *env)
{
	const char *symfs;

	if (symbol_conf.initialized)
		return 0;

	symbol_conf.priv_size = PERF_ALIGN(symbol_conf.priv_size, sizeof(u64));

	symbol__elf_init();

	if (symbol_conf.sort_by_name)
		symbol_conf.priv_size += (sizeof(struct symbol_name_rb_node) -
					  sizeof(struct symbol));

	if (symbol_conf.try_vmlinux_path && vmlinux_path__init(env) < 0)
		return -1;

	if (symbol_conf.field_sep && *symbol_conf.field_sep == '.') {
		pr_err("'.' is the only non valid --field-separator argument\n");
		return -1;
	}

	if (setup_list(&symbol_conf.dso_list,
		       symbol_conf.dso_list_str, "dso") < 0)
		return -1;

	if (setup_list(&symbol_conf.comm_list,
		       symbol_conf.comm_list_str, "comm") < 0)
		goto out_free_dso_list;

	if (setup_intlist(&symbol_conf.pid_list,
			  symbol_conf.pid_list_str, "pid") < 0)
		goto out_free_comm_list;

	if (setup_intlist(&symbol_conf.tid_list,
			  symbol_conf.tid_list_str, "tid") < 0)
		goto out_free_pid_list;

	if (setup_list(&symbol_conf.sym_list,
		       symbol_conf.sym_list_str, "symbol") < 0)
		goto out_free_tid_list;

	if (symbol_conf.sym_list &&
	    setup_addrlist(&symbol_conf.addr_list, symbol_conf.sym_list) < 0)
		goto out_free_sym_list;

	if (setup_list(&symbol_conf.bt_stop_list,
		       symbol_conf.bt_stop_list_str, "symbol") < 0)
		goto out_free_sym_list;

	/*
	 * A path to symbols of "/" is identical to ""
	 * reset here for simplicity.
	 */
	symfs = realpath(symbol_conf.symfs, NULL);
	if (symfs == NULL)
		symfs = symbol_conf.symfs;
	if (strcmp(symfs, "/") == 0)
		symbol_conf.symfs = "";
	if (symfs != symbol_conf.symfs)
		free((void *)symfs);

	symbol_conf.kptr_restrict = symbol__read_kptr_restrict();

	symbol_conf.initialized = true;

perf session: Move kmaps to perf_session
There is still some more work to do to disentangle map creation
from DSO loading, but this happens only for the kernel, and for
the early adopters of perf diff, where this disentanglement
matters most, we'll be testing different kernels, so no problem
here.
Further clarification: right now we create the kernel maps for
the various modules and discontiguous kernel text maps when
loading the DSO, we should do it as a two step process, first
creating the maps, for multiple mappings with the same DSO
store, then doing the dso load just once, for the first hit on
one of the maps sharing this DSO backing store.
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <1260741029-4430-6-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-12-14 05:50:29 +08:00
	return 0;

out_free_sym_list:
	strlist__delete(symbol_conf.sym_list);
	intlist__delete(symbol_conf.addr_list);
out_free_tid_list:
	intlist__delete(symbol_conf.tid_list);
out_free_pid_list:
	intlist__delete(symbol_conf.pid_list);
out_free_comm_list:
	strlist__delete(symbol_conf.comm_list);
out_free_dso_list:
	strlist__delete(symbol_conf.dso_list);
	return -1;
}

void symbol__exit(void)
{
	if (!symbol_conf.initialized)
		return;
	strlist__delete(symbol_conf.bt_stop_list);
	strlist__delete(symbol_conf.sym_list);
	strlist__delete(symbol_conf.dso_list);
	strlist__delete(symbol_conf.comm_list);
	intlist__delete(symbol_conf.tid_list);
	intlist__delete(symbol_conf.pid_list);
	intlist__delete(symbol_conf.addr_list);
	vmlinux_path__exit();
	symbol_conf.sym_list = symbol_conf.dso_list = symbol_conf.comm_list = NULL;
	symbol_conf.bt_stop_list = NULL;
	symbol_conf.initialized = false;
}

int symbol__config_symfs(const struct option *opt __maybe_unused,
			 const char *dir, int unset __maybe_unused)
{
	char *bf = NULL;
	int ret;

	symbol_conf.symfs = strdup(dir);
	if (symbol_conf.symfs == NULL)
		return -ENOMEM;

	/* skip the locally configured cache if a symfs is given, and
	 * config buildid dir to symfs/.debug
	 */
	ret = asprintf(&bf, "%s/%s", dir, ".debug");
	if (ret < 0)
		return -ENOMEM;

	set_buildid_dir(bf);

	free(bf);
	return 0;
}

struct mem_info *mem_info__get(struct mem_info *mi)
{
	if (mi)
		refcount_inc(&mi->refcnt);
	return mi;
}

void mem_info__put(struct mem_info *mi)
{
	if (mi && refcount_dec_and_test(&mi->refcnt))
		free(mi);
}

struct mem_info *mem_info__new(void)
{
	struct mem_info *mi = zalloc(sizeof(*mi));

	if (mi)
		refcount_set(&mi->refcnt, 1);
	return mi;
}

/*
 * Checks that user supplied symbol kernel files are accessible because
 * the default mechanism for accessing elf files fails silently. i.e. if
 * debug syms for a build ID aren't found perf carries on normally. When
 * they are user supplied we should assume that the user doesn't want to
 * silently fail.
 */
int symbol__validate_sym_arguments(void)
{
	if (symbol_conf.vmlinux_name &&
	    access(symbol_conf.vmlinux_name, R_OK)) {
		pr_err("Invalid file: %s\n", symbol_conf.vmlinux_name);
		return -EINVAL;
	}
	if (symbol_conf.kallsyms_name &&
	    access(symbol_conf.kallsyms_name, R_OK)) {
		pr_err("Invalid file: %s\n", symbol_conf.kallsyms_name);
		return -EINVAL;
	}
	return 0;
}