linux-next/kernel/extable.c
/* Rewritten by Rusty Russell, on the backs of many others...
Copyright (C) 2001 Rusty Russell, 2002 Rusty Russell IBM.
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*/
#include <linux/ftrace.h>
#include <linux/memory.h>
#include <linux/extable.h>
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/init.h>
#include <linux/kprobes.h>
#include <linux/filter.h>
#include <asm/sections.h>
#include <linux/uaccess.h>
/*
* mutex protecting text section modification (dynamic code patching).
* some users need to sleep (allocating memory...) while they hold this lock.
*
* NOT exported to modules - patching kernel text is a really delicate matter.
*/
DEFINE_MUTEX(text_mutex);
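/*
 * Typical usage (illustrative sketch only, not code from this file):
 * a text-patching path takes the mutex around the modification, e.g.
 *
 *	mutex_lock(&text_mutex);
 *	... patch kernel text ...
 *	mutex_unlock(&text_mutex);
 */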
extern struct exception_table_entry __start___ex_table[];
extern struct exception_table_entry __stop___ex_table[];
/* Cleared by build time tools if the table is already sorted. */
u32 __initdata __visible main_extable_sort_needed = 1;
/* Sort the kernel's built-in exception table */
void __init sort_main_extable(void)
{
if (main_extable_sort_needed && __stop___ex_table > __start___ex_table) {
pr_notice("Sorting __ex_table...\n");
sort_extable(__start___ex_table, __stop___ex_table);
}
}
/* Given an address, look for it in the exception tables. */
const struct exception_table_entry *search_exception_tables(unsigned long addr)
{
const struct exception_table_entry *e;
e = search_extable(__start___ex_table, __stop___ex_table-1, addr);
if (!e)
e = search_module_extables(addr);
return e;
}
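/* Return 1 if @addr lies in the kernel's init text, i.e. [_sinittext, _einittext). */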
static inline int init_kernel_text(unsigned long addr)
{
if (addr >= (unsigned long)_sinittext &&
addr < (unsigned long)_einittext)
return 1;
return 0;
}
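/*
 * Return 1 if @addr is core kernel text: either within the always-present
 * [_stext, _etext) range, or within init text while the system is still
 * booting (init text is discarded once boot completes).
 */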
int core_kernel_text(unsigned long addr)
{
if (addr >= (unsigned long)_stext &&
addr < (unsigned long)_etext)
return 1;
if (system_state == SYSTEM_BOOTING &&
init_kernel_text(addr))
return 1;
return 0;
}
/**
* core_kernel_data - tell if addr points to kernel data
* @addr: address to test
*
* Returns true if @addr passed in is from the core kernel data
* section.
*
* Note: On some archs it may return true for core RODATA, and false
* for others. But will always be true for core RW data.
*/
int core_kernel_data(unsigned long addr)
{
if (addr >= (unsigned long)_sdata &&
addr < (unsigned long)_edata)
return 1;
return 0;
}
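/*
 * Like kernel_text_address(), but additionally accepts init text, so that
 * saved stacktraces which still reference init symbols can be printed.
 */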
int __kernel_text_address(unsigned long addr)
{
if (core_kernel_text(addr))
return 1;
if (is_module_text_address(addr))
return 1;
if (is_ftrace_trampoline(addr))
return 1;
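/*
 * Dynamically allocated executable text: kprobe insn/optinsn slots and
 * BPF JITed images are kernel text as far as unwinders are concerned.
 */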
if (is_kprobe_optinsn_slot(addr) || is_kprobe_insn_slot(addr))
return 1;
if (is_bpf_text_address(addr))
return 1;
/*
* There might be init symbols in saved stacktraces.
* Give those symbols a chance to be printed in
* backtraces (such as lockdep traces).
*
* Since we are after the module-symbols check, there's
* no danger of address overlap:
*/
if (init_kernel_text(addr))
return 1;
return 0;
}
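/*
 * Return 1 if @addr is kernel text that is valid to execute: core kernel
 * text, module text, ftrace trampolines, kprobe slots or BPF JITed images.
 */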
int kernel_text_address(unsigned long addr)
{
if (core_kernel_text(addr))
return 1;
if (is_module_text_address(addr))
return 1;
if (is_ftrace_trampoline(addr))
return 1;
if (is_kprobe_optinsn_slot(addr) || is_kprobe_insn_slot(addr))
return 1;
if (is_bpf_text_address(addr))
return 1;
return 0;
}
/*
* On some architectures (PPC64, IA64) function pointers
* are actually only tokens to some data that then holds the
* real function address. As a result, to find if a function
* pointer is part of the kernel text, we need to do some
* special dereferencing first.
*/
int func_ptr_is_kernel_text(void *ptr)
{
unsigned long addr;
addr = (unsigned long) dereference_function_descriptor(ptr);
if (core_kernel_text(addr))
return 1;
return is_module_text_address(addr);
}