Mirror of https://github.com/edk2-porting/linux-next.git
Commit b39181f7c6
If an unused weak function was traced, its call to fentry will still exist, and that call site gets added into the __mcount_loc table. Ftrace uses kallsyms to retrieve the name for each location in __mcount_loc, both to display it in available_filter_functions and to enable functions by name matching in set_ftrace_filter/notrace. Enabling these functions does nothing but enable an unused call to ftrace_caller. If a traced weak function is overridden, kallsyms resolves its call site to the symbol of the preceding function, which either creates duplicate names or, if the preceding function was not traced, incorrectly lists it in available_filter_functions as a function that can be traced.

This became an issue with BPF [1], as there is tooling that enables the direct callers via ftrace and then checks to see if the functions were actually enabled. The failing case was a function marked notrace that was followed by an unused weak function that was traced. The unused function's call to fentry was added to the __mcount_loc section, and kallsyms resolved it to the untraced function's symbol because the weak function had been overridden. Since the untraced function would not get traced, the BPF check detected this and failed.

The real fix would be for kallsyms to stop attributing addresses of overridden weak functions to the function before them. But that would require adding function sizes to kallsyms at build time so that it knows where a function ends instead of just using the start of the next known symbol. In the meantime, this is a workaround.

Add a FTRACE_MCOUNT_MAX_OFFSET macro: if it is defined, ftrace ignores any call to fentry/mcount whose offset from its symbol is greater than FTRACE_MCOUNT_MAX_OFFSET. If CONFIG_HAVE_FENTRY is defined for x86, define FTRACE_MCOUNT_MAX_OFFSET to zero (unless IBT is enabled), which makes ftrace ignore all locations that are not at the start of the function (or immediately after the ENDBR instruction).

A worker thread is added at boot to scan all the ftrace record entries and mark any that fail the FTRACE_MCOUNT_MAX_OFFSET test as disabled. They will still appear in the available_filter_functions file as:

  __ftrace_invalid_address___<invalid-offset>

(showing the offset that caused it to be invalid). This is required for tools that use libtracefs (as trace-cmd does), which scan available_filter_functions and enable set_ftrace_filter and set_ftrace_notrace using indexes of the functions listed in the file (this is a speedup, as enabling thousands of functions by name is an O(n^2) operation that can take minutes to complete, whereas indexing takes less than a second). The invalid functions cannot be removed from available_filter_functions, as the names there correspond to the ftrace records in the array that manages them (and the indexing depends on this).

[1] https://lore.kernel.org/all/20220412094923.0abe90955e5db486b7bca279@kernel.org/

Link: https://lkml.kernel.org/r/20220526141912.794c2786@gandalf.local.home

Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
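To make the offset check concrete, here is a minimal sketch, not the actual upstream implementation, of how a boot-time pass might validate each ftrace record against FTRACE_MCOUNT_MAX_OFFSET. The helper name check_mcount_offset is hypothetical; struct dyn_ftrace, FTRACE_FL_DISABLED, kallsyms_lookup_size_offset(), and ENDBR_INSN_SIZE are existing kernel symbols, but the way they are combined below is illustrative only.

```c
/* Illustrative sketch only -- not the actual patch. */
#include <linux/ftrace.h>
#include <linux/kallsyms.h>
#include <linux/errno.h>

/*
 * Possible x86 definition: with IBT the fentry call sits right after the
 * ENDBR instruction, otherwise it must be at offset 0 in the function.
 * (In a real build this would live in an arch header, not a .c file.)
 */
#ifdef CONFIG_X86_KERNEL_IBT
# define FTRACE_MCOUNT_MAX_OFFSET	ENDBR_INSN_SIZE
#else
# define FTRACE_MCOUNT_MAX_OFFSET	0
#endif

/*
 * Hypothetical helper, run by a boot-time worker over every dyn_ftrace
 * record.  Returns 0 if the record is usable, -ERANGE if it was disabled
 * because its offset is too large, or -EINVAL if kallsyms has no symbol.
 */
static int check_mcount_offset(struct dyn_ftrace *rec)
{
	unsigned long offset, size;

	/* Ask kallsyms which symbol this fentry call site falls into. */
	if (!kallsyms_lookup_size_offset(rec->ip, &size, &offset))
		return -EINVAL;

	/*
	 * A call site deeper into the symbol than the architecture allows
	 * can only come from an overridden weak function whose address
	 * kallsyms attributed to the preceding symbol.  Flag it so it is
	 * never enabled and shows up as __ftrace_invalid_address___<offset>.
	 */
	if (offset > FTRACE_MCOUNT_MAX_OFFSET) {
		rec->flags |= FTRACE_FL_DISABLED;
		return -ERANGE;
	}

	return 0;
}
```

Flagging the record rather than deleting it is deliberate: available_filter_functions must keep one line per ftrace record so that the index-based enabling used by libtracefs keeps pointing at the right entries.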
Files in this directory:

- blktrace.c
- bpf_trace.c
- bpf_trace.h
- error_report-traces.c
- fgraph.c
- fprobe.c
- ftrace_internal.h
- ftrace.c
- Kconfig
- kprobe_event_gen_test.c
- Makefile
- pid_list.c
- pid_list.h
- power-traces.c
- preemptirq_delay_test.c
- rethook.c
- ring_buffer_benchmark.c
- ring_buffer.c
- rpm-traces.c
- synth_event_gen_test.c
- trace_benchmark.c
- trace_benchmark.h
- trace_boot.c
- trace_branch.c
- trace_clock.c
- trace_dynevent.c
- trace_dynevent.h
- trace_entries.h
- trace_eprobe.c
- trace_event_perf.c
- trace_events_filter_test.h
- trace_events_filter.c
- trace_events_hist.c
- trace_events_inject.c
- trace_events_synth.c
- trace_events_trigger.c
- trace_events_user.c
- trace_events.c
- trace_export.c
- trace_functions_graph.c
- trace_functions.c
- trace_hwlat.c
- trace_irqsoff.c
- trace_kdb.c
- trace_kprobe_selftest.c
- trace_kprobe_selftest.h
- trace_kprobe.c
- trace_mmiotrace.c
- trace_nop.c
- trace_osnoise.c
- trace_output.c
- trace_output.h
- trace_preemptirq.c
- trace_printk.c
- trace_probe_tmpl.h
- trace_probe.c
- trace_probe.h
- trace_recursion_record.c
- trace_sched_switch.c
- trace_sched_wakeup.c
- trace_selftest_dynamic.c
- trace_selftest.c
- trace_seq.c
- trace_stack.c
- trace_stat.c
- trace_stat.h
- trace_synth.h
- trace_syscalls.c
- trace_uprobe.c
- trace.c
- trace.h
- tracing_map.c
- tracing_map.h