mirror of https://github.com/edk2-porting/linux-next.git synced 2024-12-29 23:53:55 +08:00
Commit Graph

28 Commits

Author SHA1 Message Date
Sami Tolvanen
4ef57b21d6 recordmcount: support >64k sections
When compiling a kernel with Clang and LTO, we need to run
recordmcount on vmlinux.o with a large number of sections, which
currently fails as the program doesn't understand extended
section indexes. This change adds support for processing binaries
with >64k sections.
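
As a rough sketch of what handling extended indexes involves (the helper below
is hypothetical and ignores byte-swapping; the real change is more involved):

  #include <elf.h>

  /*
   * With more than 64k sections, st_shndx holds SHN_XINDEX and the real
   * section index lives in a parallel SHT_SYMTAB_SHNDX table, one
   * 32-bit entry per symbol.
   */
  static unsigned int sym_shndx(const Elf64_Sym *sym, unsigned long symidx,
                                const Elf64_Word *shndx_table)
  {
          if (sym->st_shndx != SHN_XINDEX)
                  return sym->st_shndx;
          return shndx_table[symidx];
  }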

Link: https://lkml.kernel.org/r/20200424193046.160744-1-samitolvanen@google.com
Link: https://lore.kernel.org/lkml/CAK7LNARbZhoaA=Nnuw0=gBrkuKbr_4Ng_Ei57uafujZf7Xazgw@mail.gmail.com/

Cc: Kees Cook <keescook@chromium.org>
Reviewed-by: Matt Helsley <mhelsley@vmware.com>
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2020-06-16 21:21:00 -04:00
Steven Rostedt (VMware)
7f8557b88d recordmcount: Fix nop_mcount() function
The removal of the longjmp code in recordmcount.c mistakenly turned a negative
return from make_nop() into an exit of nop_mcount(). A negative return should
not exit the routine; it should just skip processing that part of the code. By
exiting with an error code instead, recordmcount would fail on some files,
which in turn failed the build when ftrace function tracing was enabled.
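
A hypothetical reduction of the corrected behaviour (names invented; not the
actual diff): a failed per-site nop attempt now just skips that call site.

  /*
   * Hypothetical reduction of nop_mcount()'s loop: a negative result from
   * the per-site nop attempt means "leave this site alone", so the loop
   * keeps going instead of failing the whole object file.
   */
  static int nop_all_sites(unsigned int nsites, int (*try_nop)(unsigned int))
  {
          unsigned int i;

          for (i = 0; i < nsites; i++) {
                  if (try_nop(i) < 0)
                          continue;       /* skip this site, do not abort */
          }
          return 0;
  }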

Link: http://lkml.kernel.org/r/20191009110538.5909fec6@gandalf.local.home

Reported-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Tested-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Fixes: 3f1df12019 ("recordmcount: Rewrite error/success handling")
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2019-10-12 20:49:33 -04:00
Matt Helsley
c97fea2625 recordmcount: Remove redundant cleanup() calls
Redundant cleanup calls were introduced when transitioning from
the old error/success handling via setjmp/longjmp -- the longjmp
ensured the cleanup() call only happened once, but replacing
the success_file()/fail_file() calls with cleanup() meant that
multiple cleanup() calls could happen as we return from nested
function calls.

In do_file(), looking just before and after the "goto out" jumps, we
can see that multiple cleanup() calls are being performed. We remove
the cleanup() calls from the nested functions because it makes the code
easier to review -- the resources being cleaned up are generally
allocated and initialized in the callers, so freeing them there
makes more sense.

Other redundant cleanup() calls:

mmap_file() is only called from do_file() and, if mmap_file() fails,
then we goto out and do cleanup() there too.

write_file() is only called from do_file() and do_file()
calls cleanup() unconditionally after returning from write_file()
therefore the cleanup() calls in write_file() are not necessary.

find_secsym_ndx() is called from do_func()'s for-loop; when it cleans
up there, it's clear that we then break out of the loop and do another
cleanup() anyway.

__has_rel_mcount() is called from two parts of do_func()
and calls cleanup(). In theory we could move those calls into do_func();
however, they in turn prove redundant, so another simplification step
removes them as well.
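
The resulting shape, as a hedged sketch with simplified prototypes
(process_sections() is a stand-in, not a real function): cleanup() is called
exactly once, by do_file(), on the way out.

  int mmap_file(char const *fname);
  int process_sections(void);     /* hypothetical stand-in for the ELF walk */
  int write_file(char const *fname);
  void cleanup(void);

  static int do_file(char const *fname)
  {
          int rc = -1;

          if (mmap_file(fname) < 0)
                  goto out;
          if (process_sections() < 0)
                  goto out;
          if (write_file(fname) < 0)
                  goto out;
          rc = 0;
  out:
          cleanup();              /* the only cleanup() call on this path */
          return rc;
  }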

Link: http://lkml.kernel.org/r/de197e17fc5426623a847ea7cf3a1560a7402a4b.1564596289.git.mhelsley@vmware.com

Signed-off-by: Matt Helsley <mhelsley@vmware.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2019-08-31 12:19:40 -04:00
Matt Helsley
3aec863824 recordmcount: Kernel style function signature formatting
The uwrite() and ulseek() functions are formatted inconsistently
with the rest of the file and the kernel overall. While we're
making other changes here let's fix this.
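
Purely for illustration (the exact prototypes are assumptions), the change is
just to the layout of the signatures, e.g.:

  #include <sys/types.h>

  /* Before: return type split onto its own line. */
  static ssize_t
  uwrite(void const *const buf, size_t const count);

  /* After: usual kernel style, all on one line. */
  static ssize_t uwrite(void const *const buf, size_t const count);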

Link: http://lkml.kernel.org/r/4c67698f734be9867a2aba7035fe0ce59e1e4423.1564596289.git.mhelsley@vmware.com

Signed-off-by: Matt Helsley <mhelsley@vmware.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2019-08-31 12:19:39 -04:00
Matt Helsley
3f1df12019 recordmcount: Rewrite error/success handling
Recordmcount uses setjmp/longjmp to manage control flow as
it reads and then writes the ELF file. This unusual control
flow is hard to follow and check in addition to being unlike
kernel coding style.

So we rewrite these paths to use regular return values to
indicate error/success. When an error or a previously-completed object
file is found, we return an error code following kernel coding
conventions -- negative error values, with 0 for success -- when we're
not returning a pointer. Functions that return a pointer return NULL on
failure and a non-NULL pointer otherwise.

One oddity is already_has_rel_mcount -- there we use pointer comparison
rather than string comparison to differentiate between
previously-processed object files and returning the name of a text
section.
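
A minimal sketch of those conventions (hypothetical helpers, not the patch
itself):

  #include <errno.h>
  #include <stddef.h>

  /* Integer paths: 0 on success, negative errno-style values on failure. */
  static int check_name(const char *fname)
  {
          if (!fname)
                  return -EINVAL;
          return 0;
  }

  /* Pointer paths: NULL on failure, a non-NULL pointer otherwise. */
  static const char *pick_text_section(int found)
  {
          return found ? ".text" : NULL;
  }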

Link: http://lkml.kernel.org/r/8ba8633d4afe444931f363c8d924bf9565b89a86.1564596289.git.mhelsley@vmware.com

Signed-off-by: Matt Helsley <mhelsley@vmware.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2019-08-31 12:19:39 -04:00
Matt Helsley
17e262e995 recordmcount: Remove unused fd from uwrite() and ulseek()
uwrite() works within the pseudo-mapping and extends it as necessary
without needing the file descriptor (fd) parameter passed to it.
Similarly, ulseek() doesn't need its fd parameter. These parameters
were only added because the functions bear a conceptual resemblance
to write() and lseek(). Worse, they obscure the fact that at the time
uwrite() and ulseek() are called fd_map is not a valid file descriptor.

Remove the unused file descriptor parameters that make it look like
fd_map is still valid.
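
Illustrative prototypes only (assumed shapes, not copied from the patch):

  #include <sys/types.h>

  /*
   * Before: an fd argument that is never a usable descriptor here.
   *
   *   ssize_t uwrite(int const fd, void const *const buf, size_t const count);
   *   off_t ulseek(int const fd, off_t const offset, int const whence);
   */

  /* After: the helpers work purely on the in-memory pseudo-mapping. */
  ssize_t uwrite(void const *const buf, size_t const count);
  off_t ulseek(off_t const offset, int const whence);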

Link: http://lkml.kernel.org/r/2a136e820ee208469d375265c7b8eb28570749a0.1563992889.git.mhelsley@vmware.com

Signed-off-by: Matt Helsley <mhelsley@vmware.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2019-08-31 12:19:38 -04:00
Linus Torvalds
192f0f8e9d powerpc updates for 5.3

Merge tag 'powerpc-5.3-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux

Pull powerpc updates from Michael Ellerman:
 "Notable changes:

   - Removal of the NPU DMA code, used by the out-of-tree Nvidia driver,
     as well as some other functions only used by drivers that haven't
     (yet?) made it upstream.

   - A fix for a bug in our handling of hardware watchpoints (eg. perf
     record -e mem: ...) which could lead to register corruption and
     kernel crashes.

   - Enable HAVE_ARCH_HUGE_VMAP, which allows us to use large pages for
     vmalloc when using the Radix MMU.

   - A large but incremental rewrite of our exception handling code to
     use gas macros rather than multiple levels of nested CPP macros.

  And the usual small fixes, cleanups and improvements.

  Thanks to: Alastair D'Silva, Alexey Kardashevskiy, Andreas Schwab,
  Aneesh Kumar K.V, Anju T Sudhakar, Anton Blanchard, Arnd Bergmann,
  Athira Rajeev, Cédric Le Goater, Christian Lamparter, Christophe
  Leroy, Christophe Lombard, Christoph Hellwig, Daniel Axtens, Denis
  Efremov, Enrico Weigelt, Frederic Barrat, Gautham R. Shenoy, Geert
  Uytterhoeven, Geliang Tang, Gen Zhang, Greg Kroah-Hartman, Greg Kurz,
  Gustavo Romero, Krzysztof Kozlowski, Madhavan Srinivasan, Masahiro
  Yamada, Mathieu Malaterre, Michael Neuling, Nathan Lynch, Naveen N.
  Rao, Nicholas Piggin, Nishad Kamdar, Oliver O'Halloran, Qian Cai, Ravi
  Bangoria, Sachin Sant, Sam Bobroff, Satheesh Rajendran, Segher
  Boessenkool, Shaokun Zhang, Shawn Anastasio, Stewart Smith, Suraj
  Jitindar Singh, Thiago Jung Bauermann, YueHaibing"

* tag 'powerpc-5.3-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux: (163 commits)
  powerpc/powernv/idle: Fix restore of SPRN_LDBAR for POWER9 stop state.
  powerpc/eeh: Handle hugepages in ioremap space
  ocxl: Update for AFU descriptor template version 1.1
  powerpc/boot: pass CONFIG options in a simpler and more robust way
  powerpc/boot: add {get, put}_unaligned_be32 to xz_config.h
  powerpc/irq: Don't WARN continuously in arch_local_irq_restore()
  powerpc/module64: Use symbolic instructions names.
  powerpc/module32: Use symbolic instructions names.
  powerpc: Move PPC_HA() PPC_HI() and PPC_LO() to ppc-opcode.h
  powerpc/module64: Fix comment in R_PPC64_ENTRY handling
  powerpc/boot: Add lzo support for uImage
  powerpc/boot: Add lzma support for uImage
  powerpc/boot: don't force gzipped uImage
  powerpc/8xx: Add microcode patch to move SMC parameter RAM.
  powerpc/8xx: Use IO accessors in microcode programming.
  powerpc/8xx: replace #ifdefs by IS_ENABLED() in microcode.c
  powerpc/8xx: refactor programming of microcode CPM params.
  powerpc/8xx: refactor printing of microcode patch name.
  powerpc/8xx: Refactor microcode write
  powerpc/8xx: refactor writing of CPM microcode arrays
  ...
2019-07-13 16:08:36 -07:00
Naveen N. Rao
80e5302e4b recordmcount: Fix spurious mcount entries on powerpc
An impending change to enable HAVE_C_RECORDMCOUNT on powerpc leads to
warnings such as the following:

  # modprobe kprobe_example
  ftrace-powerpc: Not expected bl: opcode is 3c4c0001
  WARNING: CPU: 0 PID: 227 at kernel/trace/ftrace.c:2001 ftrace_bug+0x90/0x318
  Modules linked in:
  CPU: 0 PID: 227 Comm: modprobe Not tainted 5.2.0-rc6-00678-g1c329100b942 #2
  NIP:  c000000000264318 LR: c00000000025d694 CTR: c000000000f5cd30
  REGS: c000000001f2b7b0 TRAP: 0700   Not tainted  (5.2.0-rc6-00678-g1c329100b942)
  MSR:  900000010282b033 <SF,HV,VEC,VSX,EE,FP,ME,IR,DR,RI,LE,TM[E]>  CR: 28228222  XER: 00000000
  CFAR: c0000000002642fc IRQMASK: 0
  <snip>
  NIP [c000000000264318] ftrace_bug+0x90/0x318
  LR [c00000000025d694] ftrace_process_locs+0x4f4/0x5e0
  Call Trace:
  [c000000001f2ba40] [0000000000000004] 0x4 (unreliable)
  [c000000001f2bad0] [c00000000025d694] ftrace_process_locs+0x4f4/0x5e0
  [c000000001f2bb90] [c00000000020ff10] load_module+0x25b0/0x30c0
  [c000000001f2bd00] [c000000000210cb0] sys_finit_module+0xc0/0x130
  [c000000001f2be20] [c00000000000bda4] system_call+0x5c/0x70
  Instruction dump:
  419e0018 2f83ffff 419e00bc 2f83ffea 409e00cc 4800001c 0fe00000 3c62ff96
  39000001 39400000 386386d0 480000c4 <0fe00000> 3ce20003 39000001 3c62ff96
  ---[ end trace 4c438d5cebf78381 ]---
  ftrace failed to modify
  [<c0080000012a0008>] 0xc0080000012a0008
   actual:   01:00:4c:3c
  Initializing ftrace call sites
  ftrace record flags: 2000000
   (0)
   expected tramp: c00000000006af4c

Looking at the relocation records in __mcount_loc shows a few spurious
entries:

  RELOCATION RECORDS FOR [__mcount_loc]:
  OFFSET           TYPE              VALUE
  0000000000000000 R_PPC64_ADDR64    .text.unlikely+0x0000000000000008
  0000000000000008 R_PPC64_ADDR64    .text.unlikely+0x0000000000000014
  0000000000000010 R_PPC64_ADDR64    .text.unlikely+0x0000000000000060
  0000000000000018 R_PPC64_ADDR64    .text.unlikely+0x00000000000000b4
  0000000000000020 R_PPC64_ADDR64    .init.text+0x0000000000000008
  0000000000000028 R_PPC64_ADDR64    .init.text+0x0000000000000014

The first entry in each section is incorrect. Looking at the
relocation records, the spurious entries correspond to the
R_PPC64_ENTRY records:

  RELOCATION RECORDS FOR [.text.unlikely]:
  OFFSET           TYPE              VALUE
  0000000000000000 R_PPC64_REL64     .TOC.-0x0000000000000008
  0000000000000008 R_PPC64_ENTRY     *ABS*
  0000000000000014 R_PPC64_REL24     _mcount
  <snip>

The problem is that we are not validating the return value from
get_mcountsym() in sift_rel_mcount(). With this entry, mcountsym is 0,
but Elf_r_sym(relp) also ends up being 0. Fix this by ensuring
mcountsym is valid before processing the entry.
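
Reduced to its essentials (a sketch, not the actual diff), the guard looks
like this; in the real code the values come from get_mcountsym() and
Elf_r_sym(relp):

  /* Only record a call site when the mcount symbol lookup succeeded
   * (non-zero index) and the relocation actually refers to it. */
  static int should_record_site(unsigned int mcountsym,
                                unsigned int rel_sym, int fake)
  {
          return mcountsym && mcountsym == rel_sym && !fake;
  }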

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Tested-by: Satheesh Rajendran <sathnaga@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2019-07-01 16:26:54 +10:00
Thomas Gleixner
4317cf95ca treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 378
Based on 1 normalized pattern(s):

  licensed under the gnu general public license version 2 gplv2

extracted by the scancode license scanner the SPDX license identifier

  GPL-2.0-only

has been chosen to replace the boilerplate/reference in 5 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Armijn Hemel <armijn@tjaldur.nl>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190531081036.993848054@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-06-05 17:37:10 +02:00
nixiaoming
ac5db1fc89 scripts: Fixed printf format mismatch
scripts/kallsyms.c: function write_src:
"printf", the #1 format specifier "d" need arg type "int",
but the according arg "table_cnt" has type "unsigned int"

scripts/recordmcount.c: function do_file:
"fprintf", the #1 format specifier "d" need arg type "int",
but the according arg "(*w2)(ehdr->e_machine)" has type "unsigned int"

scripts/recordmcount.h: function find_secsym_ndx:
"fprintf", the #1 format specifier "d" need arg type "int",
but the according arg "txtndx" has type "unsigned int"
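
The class of fix is just matching the conversion specifier to the argument
type, e.g. (illustrative values):

  #include <stdio.h>

  int main(void)
  {
          unsigned int table_cnt = 42;

          printf("%d\n", table_cnt);      /* mismatch: %d expects int    */
          printf("%u\n", table_cnt);      /* correct for an unsigned int */
          return 0;
  }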

Signed-off-by: nixiaoming <nixiaoming@huawei.com>
Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2018-05-29 22:04:12 +09:00
libin
c84da8b9ad recordmcount: Fix endianness handling bug for nop_mcount
In nop_mcount(), shdr->sh_offset and relp->r_offset should be handled
with proper endianness conversion; otherwise recordmcount will trigger
a segmentation fault if the recordmcount binary and file.o have
different endianness.
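
A hedged sketch of the idea (the helper stands in for recordmcount's width-
and endian-aware accessors such as w()/_w()): every field read from the
object file must be converted from file byte order to host byte order before
it is used as an offset.

  #include <stdint.h>

  /* Hypothetical stand-in for recordmcount's byte-order accessors. */
  static uint64_t file_to_host64(uint64_t v, int file_is_be, int host_is_be)
  {
          return (file_is_be == host_is_be) ? v : __builtin_bswap64(v);
  }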

Link: http://lkml.kernel.org/r/563806C7.7070606@huawei.com

Cc: <stable@vger.kernel.org> # 3.0+
Signed-off-by: Li Bin <huawei.libin@huawei.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2015-11-03 10:45:26 -05:00
Alex Smith
91ad11d7cc recordmcount/MIPS: Fix possible incorrect mcount_loc table entries in modules
On MIPS calls to _mcount in modules generate 2 instructions to load
the _mcount address (and therefore 2 relocations). The mcount_loc
table should only reference the first of these, so the second is
filtered out by checking the relocation offset and ignoring ones that
immediately follow the previous one seen.

However if a module has an _mcount call at offset 0, the second
relocation would not be filtered out due to old_r_offset == 0
being taken to mean that the current relocation is the first one
seen, and both would end up in the mcount_loc table.

This results in ftrace_make_nop() patching both (adjacent)
instructions to branches over the _mcount call sequence like so:

  0xffffffffc08a8000:  04 00 00 10     b       0xffffffffc08a8014
  0xffffffffc08a8004:  04 00 00 10     b       0xffffffffc08a8018
  0xffffffffc08a8008:  2d 08 e0 03     move    at,ra
  ...

The second branch is in the delay slot of the first, which is
defined to be unpredictable - on the platform on which this bug was
encountered, it triggers a reserved instruction exception.

Fix by initializing old_r_offset to ~0 and using that instead of 0
to determine whether the current relocation is the first seen.
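
A reduced sketch of the filter (simplified names): initializing the "previous
offset" to an impossible value lets a call at offset 0 be handled like any
other.

  #include <stdint.h>

  static uint64_t old_r_offset = ~0ULL;   /* "no relocation seen yet" */

  /* MIPS emits two relocations per _mcount long call; only the first
   * belongs in __mcount_loc.  The second always sits 4 bytes after the
   * first, and ~0 (not 0) marks "nothing seen yet". */
  static int is_second_of_pair(uint64_t r_offset)
  {
          int second = (old_r_offset != ~0ULL) &&
                       (r_offset - old_r_offset == 4);
          old_r_offset = r_offset;
          return second;
  }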

Signed-off-by: Alex Smith <alex.smith@imgtec.com>
Cc: linux-kernel@vger.kernel.org
Cc: stable@vger.kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7098/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2014-06-26 10:48:19 +01:00
Steven Rostedt
48bb5dc6cd ftrace: Make recordmcount.c handle __fentry__
With gcc 4.6.0 the -mfentry feature places the function profiling
call at the start of the function. When this is used, the call is
to __fentry__ and not mcount.

Change recordmcount.c to record both callers to __fentry__ and
mcount.
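
A hedged reduction of the symbol-name check (helper name hypothetical):

  #include <string.h>

  /* Treat both the classic mcount symbol and gcc -mfentry's __fentry__
   * as profiling call sites. */
  static int is_profiling_symbol(const char *symname)
  {
          return strcmp(symname, "mcount") == 0 ||
                 strcmp(symname, "__fentry__") == 0;
  }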

Link: http://lkml.kernel.org/r/20120807194058.990674363@goodmis.org

Acked-by: H. Peter Anvin <hpa@linux.intel.com>
Acked-by: John Reiser <jreiser@bitwagon.com>
Acked-by: Ingo Molnar <mingo@kernel.org>
Cc: Andi Kleen <andi@firstfloor.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2012-08-23 11:24:43 -04:00
David Daney
2e885057b7 recordmcount: Fix handling of elf64 big-endian objects.
In ELF64, the sh_flags field is 64-bits wide.  recordmcount was
erroneously treating it as a 32-bit wide field.  For little endian
objects this works because the flags of interest (SHF_EXECINSTR)
reside in the lower 32 bits of the word, and you get the same result
with either a 32-bit or 64-bit read.  Big endian objects on the
other hand do not work at all with this error.

The fix:  Correctly treat sh_flags as 64-bits wide in elf64 objects.

The symptom I observed was that my
__start_mcount_loc..__stop_mcount_loc was empty even though ftrace
function tracing was enabled.
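
For illustration (using <elf.h> types; the real code goes through
recordmcount's own accessors), the flag test must operate on the full 64-bit
field:

  #include <elf.h>

  /* sh_flags is an Elf64_Xword (64 bits).  A 32-bit read keeps only half
   * of it: on little-endian objects that half happens to contain
   * SHF_EXECINSTR, on big-endian objects it does not, so executable
   * sections were silently skipped. */
  static int is_exec_section(const Elf64_Shdr *shdr)
  {
          return (shdr->sh_flags & SHF_EXECINSTR) != 0;
  }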

Link: http://lkml.kernel.org/r/1324345362-12230-1-git-send-email-ddaney.cavm@gmail.com

Cc: stable@kernel.org # 3.0+
Signed-off-by: David Daney <david.daney@cavium.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2012-01-06 17:06:42 -05:00
Rabin Vincent
9905ce8ad7 ftrace/recordmcount: Avoid STT_FUNC symbols as base on ARM
While find_secsym_ndx often finds the unnamed local STT_SECTION symbol,
if a section has only one function in it, the ARM toolchain generates
the STT_FUNC symbol before the STT_SECTION one, and recordmcount finds
this instead.

This is problematic on ARM because in ARM ELFs, "if a [STT_FUNC] symbol
addresses a Thumb instruction, its value is the address of the
instruction with bit zero set (in a relocatable object, the section
offset with bit zero set)".  This leads to incorrect mcount addresses
being recorded.

Fix this by not using STT_FUNC symbols as the base on ARM.
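
A reduced sketch of the rule (the real test lives in find_secsym_ndx()):

  #include <elf.h>

  /* On ARM, a Thumb STT_FUNC symbol has bit 0 set in its value, so it is
   * not a safe base for computing mcount call-site addresses; fall back
   * to another local symbol (typically the STT_SECTION one). */
  static int usable_as_base(unsigned char st_info, int is_arm)
  {
          if (is_arm && ELF64_ST_TYPE(st_info) == STT_FUNC)
                  return 0;
          return 1;
  }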

Signed-off-by: Rabin Vincent <rabin@rab.in>
Link: http://lkml.kernel.org/r/1305134631-31617-1-git-send-email-rabin@rab.in
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2011-05-25 19:56:33 -04:00
Martin Schwidefsky
07d8b595f3 ftrace/recordmcount: mcount address adjustment
Introduce mcount_adjust{,_32,_64} to the C implementation of
recordmcount, analogous to $mcount_adjust in the perl script.
The adjustment is added to the address of the relocations
against the mcount symbol. If this adjustment is done by
recordmcount at compile time, the ftrace_call_adjust function
can be turned into a nop.
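
Conceptually (a sketch with simplified names), the per-arch constant is folded
in when the __mcount_loc entry is computed, leaving nothing for
ftrace_call_adjust() to do at boot:

  /* Set per e_machine in do_file(); zero for most architectures. */
  static long mcount_adjust;

  static unsigned long mcount_loc_entry(unsigned long reloc_addr)
  {
          return reloc_addr + mcount_adjust;
  }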

Cc: John Reiser <jreiser@bitwagon.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2011-05-16 14:53:22 -04:00
Steven Rostedt
41b402a201 ftrace/recordmcount: Add helper function get_sym_str_and_relp()
The code to get the symbol, string, and relp pointers in the two functions
sift_rel_mcount() and nop_mcount() is identical and also non-trivial.
Moving this duplicated code into a single helper function makes the code
easier to read and more maintainable.
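
An outline of such a helper (the parameter shapes here are assumptions, not
the exact prototype):

  #include <elf.h>

  /* For one relocation section, resolve everything both loops need:
   * the symbol table, its string table, and the first relocation entry. */
  static void get_sym_str_and_relp(Elf64_Shdr const *relhdr,
                                   Elf64_Ehdr const *ehdr,
                                   Elf64_Sym const **sym0,
                                   char const **str0,
                                   Elf64_Rel const **relp);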

Cc: John Reiser <jreiser@bitwagon.com>
Link: http://lkml.kernel.org/r/20110421023739.723658553@goodmis.org
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2011-05-16 14:48:55 -04:00
Steven Rostedt
37762cb997 ftrace/recordmcount: Remove duplicate code to find mcount symbol
The code in sift_rel_mcount() and nop_mcount() to get the mcount symbol
number is identical. Replace the two locations with a call to a function
that does the work.

Cc: John Reiser <jreiser@bitwagon.com>
Link: http://lkml.kernel.org/r/20110421023739.488093407@goodmis.org
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2011-05-16 14:48:02 -04:00
Steven Rostedt
dfad3d598c ftrace/recordmcount: Add warning logic to warn on mcount not recorded
There are some sections that should not have mcount recorded and should
not have modifications to their code, but currently they waste some time
by calling mcount anyway (which simply returns). The real answer should be
to either whitelist the section or have gcc ignore it fully.

This change adds an option to recordmcount to warn when it finds a section
that is ignored by ftrace but still contains mcount callers. This is not on
by default, as developers may not know whether the section should be
completely ignored or added to the whitelist.
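
As a hedged sketch of the behaviour (the variable and message shown are
illustrative): the check is off by default and only prints a diagnostic.

  #include <stdio.h>

  static int warn_on_notrace_sect;        /* set by a command-line option */

  static void warn_ignored_mcount(const char *secname)
  {
          if (warn_on_notrace_sect)
                  fprintf(stderr,
                          "Section %s has mcount callers being ignored\n",
                          secname);
  }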

Cc: John Reiser <jreiser@bitwagon.com>
Link: http://lkml.kernel.org/r/20110421023738.476989377@goodmis.org
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2011-05-16 14:44:20 -04:00
Steven Rostedt
ffd618fa39 ftrace/recordmcount: Make ignored mcount calls into nops at compile time
There are sections that are ignored by ftrace for function tracing because
their text is in a section that can be removed without notice. The mcount calls
in these sections are ignored and ftrace never sees them. The downside of this
is that the functions in these sections still call mcount. Although the mcount
function is defined in assembly simply as a return, this added overhead is
unnecessary.

The solution is to convert these callers into nops at compile time.
A better solution would be to add 'notrace' to the section markers, but as new
sections come up all the time, it would be nice if they were dealt with when
they are created.

Later patches will deal with finding these sections and applying the proper solution.

Thanks to H. Peter Anvin for giving me the right nops to use for x86.
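
A sketch of the mechanism (the byte sequence shown is a standard 5-byte x86-64
nop; treat the surrounding code as illustrative):

  #include <stdint.h>
  #include <string.h>

  /* One valid 5-byte x86-64 nop: nopl 0x0(%rax,%rax,1). */
  static const unsigned char nop5[5] = { 0x0f, 0x1f, 0x44, 0x00, 0x00 };

  /* Overwrite a 5-byte "call mcount" in the mapped object image so the
   * ignored section no longer calls mcount at run time. */
  static void nop_call_site(uint8_t *image, unsigned long call_offset)
  {
          memcpy(image + call_offset, nop5, sizeof(nop5));
  }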

Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: John Reiser <jreiser@bitwagon.com>
Link: http://lkml.kernel.org/r/20110421023738.237101176@goodmis.org
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2011-05-16 14:43:32 -04:00
Steven Rostedt
8abd5724a7 ftrace/recordmcount: Modify only executable sections
PROGBITS is not enough to determine if the section should be modified
or not. Only process sections that are marked as executable.

Cc: John Reiser <jreiser@bitwagon.com>
Link: http://lkml.kernel.org/r/20110421023737.991485123@goodmis.org
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2011-05-16 14:42:56 -04:00
Steven Rostedt
dd5477ff3b ftrace/trivial: Clean up recordmcount.c to use Linux style comparisons
The Linux ftrace subsystem style for comparing is:

  var == 1
  var > 0

and not:

  1 == var
  0 < var

It is considered that Linux developers are smart enough not to make the

  if (var = 1)

mistake.

Cc: John Reiser <jreiser@bitwagon.com>
Link: http://lkml.kernel.org/r/20110421023737.290712238@goodmis.org
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2011-05-16 14:38:51 -04:00
Russell King
31edf274f9 Merge branches 'ftrace', 'gic', 'io', 'kexec', 'mod', 'sa11x0', 'sh' and 'versatile' into devel 2011-01-05 18:08:10 +00:00
Rabin Vincent
ed60453fa8 ARM: 6511/1: ftrace: add ARM support for C version of recordmcount
Depending on the compiler version, ARM GCC calls the mcount function
either __gnu_mcount_nc or mcount.

Acked-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Rabin Vincent <rabin@rab.in>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-12-04 11:30:27 +00:00
John Reiser
e63233f75a ftrace: Have recordmcount honor endianness in fn_ELF_R_INFO
It looks to me like the change which introduced "virtual functions"
forgot about cross-platform endianness.

Thank you to Arnaud for supplying before+after data files do_mounts*.o.

This fixes a MIPS build failure triggered by recordmcount.
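
A sketch of the point (helper shapes hypothetical): the sym/type packing has
to be produced in the object file's byte order, not the host's.

  #include <elf.h>
  #include <stdint.h>

  /* Compose r_info in host order, then convert it to the file's byte
   * order before storing it back into the mapped ELF image. */
  static void set_rel_info(Elf32_Rel *rp, unsigned int sym, unsigned int type,
                           uint32_t (*to_file_order)(uint32_t))
  {
          rp->r_info = to_file_order(ELF32_R_INFO(sym, type));
  }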

Reported-by: Arnaud Lacombe <lacombar@gmail.com>
Tested-by: Arnaud Lacombe <lacombar@gmail.com>
Acked-by: Wu Zhangjin <wuzhangjin@gmail.com>
Acked-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: John Reiser <jreiser@BitWagon.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2010-12-02 10:37:48 -05:00
Wu Zhangjin
412910cd04 ftrace/MIPS: Add module support for C version of recordmcount
Since the address space of MIPS modules differs from the core kernel space,
kernel functions in modules must use a long call (-mlong-calls) to reach
_mcount in the core kernel: load the _mcount address into a register and jump
to the address stored in that register:

 c:  3c030000        lui     v1,0x0  <-------->  b label
           c: R_MIPS_HI16  _mcount
           c: R_MIPS_NONE  *ABS*
           c: R_MIPS_NONE  *ABS*
10:  64630000        daddiu  v1,v1,0
          10: R_MIPS_LO16 _mcount
          10: R_MIPS_NONE *ABS*
          10: R_MIPS_NONE *ABS*
14:  03e0082d        move    at,ra
18:  0060f809        jalr    v1
label:

In the old Perl version of recordmcount, we only needed to record the position
of the 1st R_MIPS_HI16 relocation against _mcount; later, in ftrace_make_nop(),
the instruction at this position is replaced by a "b label", and
ftrace_make_call() replaces it back.

But the default C version of recordmcount records all of the _mcount symbols,
so we must filter out the 2nd _mcount like the Perl version of recordmcount
does.

The C version of recordmcount works on the symbols before they are linked, so
it doesn't know the type of the symbols and therefore cannot filter them as the
Perl version does. But as we can see above, the 2nd _mcount symbol of a long
call always follows the 1st _mcount symbol of the same long call, which means
the offset from the 1st to the 2nd is fixed: it is 0x10 - 0xc = 4 here, the
length of the 1st load instruction. Since MIPS has fixed-length instructions,
this offset is always 4.

And as we know, _mcount is inserted at the entry of every kernel function, so
the offset between any other pair of _mcount relocations is expected to always
be bigger than 4. So, to filter out the 2nd _mcount symbol of a long call, we
can simply check the offset between two _mcount symbols: if it is 4, filter out
the 2nd one.

To avoid touching too much code, an 'empty' function fn_is_fake_mcount() is
added for all of the archs, and specific archs can override it by changing the
function pointer is_fake_mcount in do_file() based on e_machine. E.g. this
patch adds MIPS_is_fake_mcount() to override the default fn_is_fake_mcount()
pointed to by is_fake_mcount.

This fn_is_fake_mcount() checks whether an _mcount symbol is fake, e.g. the 2nd
_mcount symbol of a long call is fake, for there are 2 _mcount symbols mapped
to one real mcount call, so one of them is fake and must be filtered out.

This fn_is_fake_mcount() is called in sift_rel_mcount() after finding the
_mcount symbols and before adding the _mcount symbol into mrelp, so it can
prevent the fake mcount symbol from going into the final __mcount_loc table.
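
A hedged sketch of that hook (types simplified): do_file() selects the
per-arch predicate, and sift_rel_mcount() consults it before emitting an
entry.

  #include <elf.h>

  /* Default predicate: no relocation is fake. */
  static int fn_is_fake_mcount(Elf64_Rel const *relp)
  {
          (void)relp;
          return 0;
  }

  static int (*is_fake_mcount)(Elf64_Rel const *relp) = fn_is_fake_mcount;

  /*
   * In do_file(), after e_machine is known (sketch):
   *         if (machine == EM_MIPS)
   *                 is_fake_mcount = MIPS_is_fake_mcount;
   * where MIPS_is_fake_mcount() returns 1 for the relocation that sits
   * 4 bytes after the previous _mcount relocation of the same long call.
   */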

Signed-off-by: Wu Zhangjin <wuzhangjin@gmail.com>
LKML-Reference: <b866f0138224340a132d31861fa3f9300dee30ac.1288176026.git.wuzhangjin@gmail.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2010-10-29 19:08:55 +01:00
John Reiser
a2d49358ba ftrace/MIPS: Add MIPS64 support for C version of recordmcount
MIPS64 has a 'weird' Elf64_Rel.r_info layout [1][2], which must be used
instead of the generic Elf64_Rel.r_info; otherwise, the C version of
recordmcount will fail with a segmentation fault.

The "union mips_r_info" and the functions MIPS64_r_sym() and
MIPS64_r_info() were written by Maciej W. Rozycki <macro@linux-mips.org>.

----
[1] http://techpubs.sgi.com/library/manuals/4000/007-4658-001/pdf/007-4658-001.pdf
[2] arch/mips/include/asm/module.h
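
The layout in question looks roughly like this (a sketch following [2]; the
field details are reproduced from memory, so treat them as approximate):

  #include <elf.h>

  /* MIPS64 splits r_info into separate sub-fields rather than the
   * generic sym/type packing, so r_sym must be read through this view. */
  union mips_r_info {
          Elf64_Xword r_info;
          struct {
                  Elf64_Word r_sym;       /* symbol index */
                  unsigned char r_ssym;   /* special symbol */
                  unsigned char r_type3;  /* third relocation type */
                  unsigned char r_type2;  /* second relocation type */
                  unsigned char r_type;   /* first relocation type */
          } r_mips;
  };

  static unsigned int MIPS64_r_sym(Elf64_Rel const *rp)
  {
          return ((union mips_r_info const *)&rp->r_info)->r_mips.r_sym;
  }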

Tested-by: Wu Zhangjin <wuzhangjin@gmail.com>
Signed-off-by: John Reiser <jreiser@BitWagon.com>
Signed-off-by: Maciej W. Rozycki <macro@linux-mips.org>
LKML-Reference: <AANLkTinwXjLAYACUfhLYaocHD_vBbiErLN3NjwN8JqSy@mail.gmail.com>
LKML-Reference: <910dc2d5ae1ed042df4f96815fe4a433078d1c2a.1288176026.git.wuzhangjin@gmail.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2010-10-29 19:08:54 +01:00
Steven Rostedt
c28d5077f8 ftrace: Remove duplicate code for 64 and 32 bit in recordmcount.c
The elf reader for recordmcount.c had duplicate functions for both
32 bit and 64 bit elf handling. This was due to the need of using
the 32 and 64 bit elf structures.

This patch consolidates the two by using macros to define the 32
and 64 bit names in a recordmcount.h file, and then by just defining
a RECORD_MCOUNT_64 macro and including recordmcount.h twice we
create the funtions for both the 32 bit version as well as the
64 bit version using one code source.
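
A skeleton of the technique (contents illustrative; the real header defines
many more names this way):

  /* recordmcount.c -- build both widths from the same template: */
  #include "recordmcount.h"        /* 32-bit variants */
  #define RECORD_MCOUNT_64
  #include "recordmcount.h"        /* 64-bit variants */

  /* recordmcount.h -- pick names and types per inclusion (sketch): */
  #ifdef RECORD_MCOUNT_64
  # define do_func        do64
  # define Elf_Ehdr       Elf64_Ehdr
  #else
  # define do_func        do32
  # define Elf_Ehdr       Elf32_Ehdr
  #endif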

Cc: John Reiser <jreiser@bitwagon.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2010-10-14 16:54:00 -04:00