/*
 * Infrastructure for profiling code inserted by 'gcc -pg'.
 *
 * Copyright (C) 2007-2008 Steven Rostedt <srostedt@redhat.com>
 * Copyright (C) 2004-2008 Ingo Molnar <mingo@redhat.com>
 *
 * Originally ported from the -rt patch by:
 *   Copyright (C) 2007 Arnaldo Carvalho de Melo <acme@redhat.com>
 *
 * Based on code in the latency_tracer, that is:
 *
 *  Copyright (C) 2004-2006 Ingo Molnar
 *  Copyright (C) 2004 William Lee Irwin III
 */

#include <linux/stop_machine.h>
#include <linux/clocksource.h>
#include <linux/kallsyms.h>
#include <linux/seq_file.h>
#include <linux/suspend.h>
#include <linux/debugfs.h>
#include <linux/hardirq.h>
#include <linux/kthread.h>
#include <linux/uaccess.h>
#include <linux/kprobes.h>
#include <linux/ftrace.h>
#include <linux/sysctl.h>
#include <linux/ctype.h>
#include <linux/list.h>
#include <linux/hash.h>

#include <trace/sched.h>

#include <asm/ftrace.h>

#include "trace.h"

#define FTRACE_WARN_ON(cond)                    \
        do {                                    \
                if (WARN_ON(cond))              \
                        ftrace_kill();          \
        } while (0)

#define FTRACE_WARN_ON_ONCE(cond)               \
        do {                                    \
                if (WARN_ON_ONCE(cond))         \
                        ftrace_kill();          \
        } while (0)

/* hash bits for specific function selection */
#define FTRACE_HASH_BITS 7
#define FTRACE_FUNC_HASHSIZE (1 << FTRACE_HASH_BITS)

/* ftrace_enabled is a method to turn ftrace on or off */
int ftrace_enabled __read_mostly;
static int last_ftrace_enabled;

/* Quick disabling of function tracer. */
int function_trace_stop;

/*
 * ftrace_disabled is set when an anomaly is discovered.
 * ftrace_disabled is much stronger than ftrace_enabled.
 */
static int ftrace_disabled __read_mostly;

static DEFINE_MUTEX(ftrace_lock);

static struct ftrace_ops ftrace_list_end __read_mostly =
{
        .func = ftrace_stub,
};

static struct ftrace_ops *ftrace_list __read_mostly = &ftrace_list_end;
ftrace_func_t ftrace_trace_function __read_mostly = ftrace_stub;
ftrace_func_t __ftrace_trace_function __read_mostly = ftrace_stub;
ftrace_func_t ftrace_pid_function __read_mostly = ftrace_stub;

static void ftrace_list_func(unsigned long ip, unsigned long parent_ip)
{
        struct ftrace_ops *op = ftrace_list;

        /* in case someone actually ports this to alpha! */
        read_barrier_depends();

        while (op != &ftrace_list_end) {
                /* silly alpha */
                read_barrier_depends();
                op->func(ip, parent_ip);
                op = op->next;
        }
}
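
/*
 * Illustrative sketch, not part of the original file: a minimal tracer
 * hooking into the list above. The names my_trace_func and my_ops are
 * hypothetical; register_ftrace_function() is the public entry point
 * declared in linux/ftrace.h.
 *
 *      static void my_trace_func(unsigned long ip, unsigned long parent_ip)
 *      {
 *              // ip is the traced function, parent_ip its call site
 *      }
 *
 *      static struct ftrace_ops my_ops __read_mostly = {
 *              .func = my_trace_func,
 *      };
 *
 *      register_ftrace_function(&my_ops);
 *
 * With a single ops registered, ftrace_trace_function points at its
 * ->func directly; with two or more, it points at ftrace_list_func(),
 * which walks the list and calls each ops->func in turn.
 */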

static void ftrace_pid_func(unsigned long ip, unsigned long parent_ip)
{
        if (!test_tsk_trace_trace(current))
                return;

        ftrace_pid_function(ip, parent_ip);
}

static void set_ftrace_pid_function(ftrace_func_t func)
{
        /* do not set ftrace_pid_function to itself! */
        if (func != ftrace_pid_func)
                ftrace_pid_function = func;
}

/**
 * clear_ftrace_function - reset the ftrace function
 *
 * This NULLs the ftrace function and in essence stops
 * tracing. There may be a lag before the change is visible
 * to all CPUs.
 */
void clear_ftrace_function(void)
{
        ftrace_trace_function = ftrace_stub;
        __ftrace_trace_function = ftrace_stub;
        ftrace_pid_function = ftrace_stub;
}

#ifndef CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST
/*
 * For those archs that do not test function_trace_stop in their
 * mcount call site, we need to do it from C.
 */
static void ftrace_test_stop_func(unsigned long ip, unsigned long parent_ip)
{
        if (function_trace_stop)
                return;

        __ftrace_trace_function(ip, parent_ip);
}
#endif

static int __register_ftrace_function(struct ftrace_ops *ops)
{
        ops->next = ftrace_list;
        /*
         * We are entering ops into the ftrace_list but another
         * CPU might be walking that list. We need to make sure
         * the ops->next pointer is valid before another CPU sees
         * the ops pointer included into the ftrace_list.
         */
        smp_wmb();
        ftrace_list = ops;

        if (ftrace_enabled) {
                ftrace_func_t func;

                if (ops->next == &ftrace_list_end)
                        func = ops->func;
                else
                        func = ftrace_list_func;

                if (ftrace_pid_trace) {
                        set_ftrace_pid_function(func);
                        func = ftrace_pid_func;
                }

                /*
                 * For one func, simply call it directly.
                 * For more than one func, call the chain.
                 */
#ifdef CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST
                ftrace_trace_function = func;
#else
                __ftrace_trace_function = func;
                ftrace_trace_function = ftrace_test_stop_func;
#endif
        }

        return 0;
}

static int __unregister_ftrace_function(struct ftrace_ops *ops)
{
        struct ftrace_ops **p;

        /*
         * If we are removing the last function, then simply point
         * to the ftrace_stub.
         */
        if (ftrace_list == ops && ops->next == &ftrace_list_end) {
                ftrace_trace_function = ftrace_stub;
                ftrace_list = &ftrace_list_end;
                return 0;
        }

        for (p = &ftrace_list; *p != &ftrace_list_end; p = &(*p)->next)
                if (*p == ops)
                        break;

        if (*p != ops)
                return -1;

        *p = (*p)->next;

        if (ftrace_enabled) {
                /* If we only have one func left, then call that directly */
                if (ftrace_list->next == &ftrace_list_end) {
                        ftrace_func_t func = ftrace_list->func;

                        if (ftrace_pid_trace) {
                                set_ftrace_pid_function(func);
                                func = ftrace_pid_func;
                        }
#ifdef CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST
                        ftrace_trace_function = func;
#else
                        __ftrace_trace_function = func;
#endif
                }
        }

        return 0;
}
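
/*
 * Usage sketch (assumption based on the public API in linux/ftrace.h,
 * not on code shown in this excerpt): callers never invoke the
 * __register/__unregister helpers above directly; they go through the
 * locked wrappers defined later in this file, e.g.
 *
 *      ret = register_ftrace_function(&my_ops);   // my_ops as sketched above
 *      ...
 *      unregister_ftrace_function(&my_ops);
 *
 * which are expected to take ftrace_lock around these helpers and to
 * trigger the actual call-site patching.
 */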

static void ftrace_update_pid_func(void)
{
        ftrace_func_t func;

        if (ftrace_trace_function == ftrace_stub)
                return;

        func = ftrace_trace_function;

        if (ftrace_pid_trace) {
                set_ftrace_pid_function(func);
                func = ftrace_pid_func;
        } else {
                if (func == ftrace_pid_func)
                        func = ftrace_pid_function;
        }

#ifdef CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST
        ftrace_trace_function = func;
#else
        __ftrace_trace_function = func;
#endif
}

/* set when tracing only a pid */
struct pid *ftrace_pid_trace;
static struct pid * const ftrace_swapper_pid = &init_struct_pid;

#ifdef CONFIG_DYNAMIC_FTRACE

#ifndef CONFIG_FTRACE_MCOUNT_RECORD
# error Dynamic ftrace depends on MCOUNT_RECORD
#endif

static struct hlist_head ftrace_func_hash[FTRACE_FUNC_HASHSIZE] __read_mostly;

struct ftrace_func_probe {
        struct hlist_node       node;
        struct ftrace_probe_ops *ops;
        unsigned long           flags;
        unsigned long           ip;
        void                    *data;
        struct rcu_head         rcu;
};
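
/*
 * Sketch of how a probe entry is presumably looked up (hedged; the
 * lookup code itself is outside this excerpt): entries are hashed by
 * call-site address into ftrace_func_hash with hash_long() from
 * linux/hash.h, e.g.
 *
 *      unsigned long key = hash_long(ip, FTRACE_HASH_BITS);
 *      struct hlist_head *hhd = &ftrace_func_hash[key];
 *
 * and the matching struct ftrace_func_probe is then found by comparing
 * entry->ip against ip while walking hhd.
 */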

enum {
        FTRACE_ENABLE_CALLS             = (1 << 0),
        FTRACE_DISABLE_CALLS            = (1 << 1),
        FTRACE_UPDATE_TRACE_FUNC        = (1 << 2),
        FTRACE_ENABLE_MCOUNT            = (1 << 3),
        FTRACE_DISABLE_MCOUNT           = (1 << 4),
        FTRACE_START_FUNC_RET           = (1 << 5),
        FTRACE_STOP_FUNC_RET            = (1 << 6),
};
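
/*
 * These values are bit flags, so one update pass can carry several
 * commands at once. Purely illustrative example (the helper that
 * consumes the mask appears later in the file, not in this excerpt):
 *
 *      int command = FTRACE_ENABLE_CALLS | FTRACE_START_FUNC_RET;
 */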

static int ftrace_filtered;

static struct dyn_ftrace *ftrace_new_addrs;

static DEFINE_MUTEX(ftrace_regex_lock);

struct ftrace_page {
        struct ftrace_page      *next;
        int                     index;
        struct dyn_ftrace       records[];
};

#define ENTRIES_PER_PAGE \
  ((PAGE_SIZE - sizeof(struct ftrace_page)) / sizeof(struct dyn_ftrace))

/* estimate from running different kernels */
#define NR_TO_INIT              10000
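
/*
 * Worked example (illustrative numbers only): with a 4096-byte page,
 * a 16-byte struct ftrace_page header and a hypothetical 16-byte
 * struct dyn_ftrace, ENTRIES_PER_PAGE evaluates to
 * (4096 - 16) / 16 = 255 records per page, so the NR_TO_INIT estimate
 * of 10000 call sites corresponds to roughly 40 pages at boot.
 */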

static struct ftrace_page       *ftrace_pages_start;
static struct ftrace_page       *ftrace_pages;

static struct dyn_ftrace *ftrace_free_records;

/*
 * This is a double for loop. Do not use 'break' to break out of it;
 * you must use a goto.
 */
#define do_for_each_ftrace_rec(pg, rec)                                 \
        for (pg = ftrace_pages_start; pg; pg = pg->next) {              \
                int _____i;                                             \
                for (_____i = 0; _____i < pg->index; _____i++) {        \
                        rec = &pg->records[_____i];

#define while_for_each_ftrace_rec()             \
                }                               \
        }
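
/*
 * Usage sketch: because the macro pair expands to two nested for loops,
 * early exit must use a goto, never a break. target_ip below is a
 * hypothetical variable, purely for illustration:
 *
 *      do_for_each_ftrace_rec(pg, rec) {
 *              if (rec->ip == target_ip)
 *                      goto found;
 *      } while_for_each_ftrace_rec();
 *      return;
 * found:
 *      ...
 *
 * ftrace_release() below walks all records with the same pattern.
 */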

#ifdef CONFIG_KPROBES

static int frozen_record_count;

static inline void freeze_record(struct dyn_ftrace *rec)
{
        if (!(rec->flags & FTRACE_FL_FROZEN)) {
                rec->flags |= FTRACE_FL_FROZEN;
                frozen_record_count++;
        }
}

static inline void unfreeze_record(struct dyn_ftrace *rec)
{
        if (rec->flags & FTRACE_FL_FROZEN) {
                rec->flags &= ~FTRACE_FL_FROZEN;
                frozen_record_count--;
        }
}

static inline int record_frozen(struct dyn_ftrace *rec)
{
        return rec->flags & FTRACE_FL_FROZEN;
}
#else
# define freeze_record(rec)                     ({ 0; })
# define unfreeze_record(rec)                   ({ 0; })
# define record_frozen(rec)                     ({ 0; })
#endif /* CONFIG_KPROBES */

static void ftrace_free_rec(struct dyn_ftrace *rec)
{
        rec->freelist = ftrace_free_records;
        ftrace_free_records = rec;
        rec->flags |= FTRACE_FL_FREE;
}

void ftrace_release(void *start, unsigned long size)
{
        struct dyn_ftrace *rec;
        struct ftrace_page *pg;
        unsigned long s = (unsigned long)start;
        unsigned long e = s + size;

        if (ftrace_disabled || !start)
                return;

        mutex_lock(&ftrace_lock);
        do_for_each_ftrace_rec(pg, rec) {
                if ((rec->ip >= s) && (rec->ip < e)) {
                        /*
                         * rec->ip is changed in ftrace_free_rec();
                         * it should not be between s and e if the
                         * record was freed.
                         */
                        FTRACE_WARN_ON(rec->flags & FTRACE_FL_FREE);
                        ftrace_free_rec(rec);
                }
        } while_for_each_ftrace_rec();
        mutex_unlock(&ftrace_lock);
}

static struct dyn_ftrace *ftrace_alloc_dyn_node(unsigned long ip)
{
        struct dyn_ftrace *rec;

        /* First check for freed records */
        if (ftrace_free_records) {
                rec = ftrace_free_records;

                if (unlikely(!(rec->flags & FTRACE_FL_FREE))) {
                        FTRACE_WARN_ON_ONCE(1);
                        ftrace_free_records = NULL;
                        return NULL;
                }

                ftrace_free_records = rec->freelist;
                memset(rec, 0, sizeof(*rec));
                return rec;
        }

        if (ftrace_pages->index == ENTRIES_PER_PAGE) {
                if (!ftrace_pages->next) {
                        /* allocate another page */
                        ftrace_pages->next =
                                (void *)get_zeroed_page(GFP_KERNEL);
                        if (!ftrace_pages->next)
                                return NULL;
                }
                ftrace_pages = ftrace_pages->next;
        }

        return &ftrace_pages->records[ftrace_pages->index++];
}
2008-10-23 21:33:07 +08:00
|
|
|
static struct dyn_ftrace *
|
2008-05-13 03:20:43 +08:00
|
|
|
ftrace_record_ip(unsigned long ip)
|
ftrace: dynamic enabling/disabling of function calls
This patch adds a feature to dynamically replace the ftrace code
with the jmps to allow a kernel with ftrace configured to run
as fast as it can without it configured.
The way this works, is on bootup (if ftrace is enabled), a ftrace
function is registered to record the instruction pointer of all
places that call the function.
Later, if there's still any code to patch, a kthread is awoken
(rate limited to at most once a second) that performs a stop_machine,
and replaces all the code that was called with a jmp over the call
to ftrace. It only replaces what was found the previous time. Typically
the system reaches equilibrium quickly after bootup and there's no code
patching needed at all.
e.g.
call ftrace /* 5 bytes */
is replaced with
jmp 3f /* jmp is 2 bytes and we jump 3 forward */
3:
When we want to enable ftrace for function tracing, the IP recording
is removed, and stop_machine is called again to replace all the locations
of that were recorded back to the call of ftrace. When it is disabled,
we replace the code back to the jmp.
Allocation is done by the kthread. If the ftrace recording function is
called, and we don't have any record slots available, then we simply
skip that call. Once a second a new page (if needed) is allocated for
recording new ftrace function calls. A large batch is allocated at
boot up to get most of the calls there.
Because we do this via stop_machine, we don't have to worry about another
CPU executing a ftrace call as we modify it. But we do need to worry
about NMI's so all functions that might be called via nmi must be
annotated with notrace_nmi. When this code is configured in, the NMI code
will not call notrace.
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-05-13 03:20:42 +08:00
|
|
|
{
|
2008-10-23 21:33:07 +08:00
|
|
|
struct dyn_ftrace *rec;
|
ftrace: dynamic enabling/disabling of function calls
This patch adds a feature to dynamically replace the ftrace code
with the jmps to allow a kernel with ftrace configured to run
as fast as it can without it configured.
The way this works, is on bootup (if ftrace is enabled), a ftrace
function is registered to record the instruction pointer of all
places that call the function.
Later, if there's still any code to patch, a kthread is awoken
(rate limited to at most once a second) that performs a stop_machine,
and replaces all the code that was called with a jmp over the call
to ftrace. It only replaces what was found the previous time. Typically
the system reaches equilibrium quickly after bootup and there's no code
patching needed at all.
e.g.
call ftrace /* 5 bytes */
is replaced with
jmp 3f /* jmp is 2 bytes and we jump 3 forward */
3:
When we want to enable ftrace for function tracing, the IP recording
is removed, and stop_machine is called again to replace all the locations
of that were recorded back to the call of ftrace. When it is disabled,
we replace the code back to the jmp.
Allocation is done by the kthread. If the ftrace recording function is
called, and we don't have any record slots available, then we simply
skip that call. Once a second a new page (if needed) is allocated for
recording new ftrace function calls. A large batch is allocated at
boot up to get most of the calls there.
Because we do this via stop_machine, we don't have to worry about another
CPU executing a ftrace call as we modify it. But we do need to worry
about NMI's so all functions that might be called via nmi must be
annotated with notrace_nmi. When this code is configured in, the NMI code
will not call notrace.
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-05-13 03:20:42 +08:00
|
|
|
|
2008-11-15 08:21:19 +08:00
|
|
|
if (ftrace_disabled)
|
2008-10-23 21:33:07 +08:00
|
|
|
return NULL;
|

	rec = ftrace_alloc_dyn_node(ip);
	if (!rec)
		return NULL;

	rec->ip = ip;
	rec->newlist = ftrace_new_addrs;
	ftrace_new_addrs = rec;

	return rec;
}

static void print_ip_ins(const char *fmt, unsigned char *p)
{
	int i;

	printk(KERN_CONT "%s", fmt);

	for (i = 0; i < MCOUNT_INSN_SIZE; i++)
		printk(KERN_CONT "%s%02x", i ? ":" : "", p[i]);
}
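
/*
 * Descriptive summary of the function below: report a failed attempt to
 * modify an mcount call site. -EFAULT: faulted while modifying the site,
 * -EINVAL: the bytes found there did not match what was expected (the
 * actual bytes are printed), -EPERM: faulted while writing the new code.
 * The offending address is printed in every case.
 */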
static void ftrace_bug(int failed, unsigned long ip)
{
	switch (failed) {
	case -EFAULT:
		FTRACE_WARN_ON_ONCE(1);
		pr_info("ftrace faulted on modifying ");
		print_ip_sym(ip);
		break;
	case -EINVAL:
		FTRACE_WARN_ON_ONCE(1);
		pr_info("ftrace failed to modify ");
		print_ip_sym(ip);
		print_ip_ins(" actual: ", (unsigned char *)ip);
		printk(KERN_CONT "\n");
		break;
	case -EPERM:
		FTRACE_WARN_ON_ONCE(1);
		pr_info("ftrace faulted on writing ");
		print_ip_sym(ip);
		break;
	default:
		FTRACE_WARN_ON_ONCE(1);
		pr_info("ftrace faulted on unknown error ");
		print_ip_sym(ip);
	}
}
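
/*
 * Descriptive summary of the function below: decide what a single call
 * site should become. Based on the record's NOTRACE/FILTER/ENABLED flags
 * and the enable argument, the site is either patched to call ftrace
 * (ftrace_make_call) or patched back to a nop (ftrace_make_nop); if
 * nothing needs to change, 0 is returned without touching the text.
 */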
static int
__ftrace_replace_code(struct dyn_ftrace *rec, int enable)
{
	unsigned long ftrace_addr;
	unsigned long ip, fl;

	ftrace_addr = (unsigned long)FTRACE_ADDR;

	ip = rec->ip;

	/*
	 * If this record is not to be traced and
	 * it is not enabled then do nothing.
	 *
	 * If this record is not to be traced and
	 * it is enabled then disable it.
	 *
	 */
	if (rec->flags & FTRACE_FL_NOTRACE) {
		if (rec->flags & FTRACE_FL_ENABLED)
			rec->flags &= ~FTRACE_FL_ENABLED;
		else
			return 0;

	} else if (ftrace_filtered && enable) {
		/*
		 * Filtering is on:
		 */

		fl = rec->flags & (FTRACE_FL_FILTER | FTRACE_FL_ENABLED);

		/* Record is filtered and enabled, do nothing */
		if (fl == (FTRACE_FL_FILTER | FTRACE_FL_ENABLED))
			return 0;

		/* Record is not filtered or enabled, do nothing */
		if (!fl)
			return 0;

		/* Record is not filtered but enabled, disable it */
		if (fl == FTRACE_FL_ENABLED)
			rec->flags &= ~FTRACE_FL_ENABLED;
		else
		/* Otherwise record is filtered but not enabled, enable it */
			rec->flags |= FTRACE_FL_ENABLED;
	} else {
		/* Disable or not filtered */

		if (enable) {
			/* if record is enabled, do nothing */
			if (rec->flags & FTRACE_FL_ENABLED)
				return 0;

			rec->flags |= FTRACE_FL_ENABLED;

		} else {

			/* if record is not enabled, do nothing */
			if (!(rec->flags & FTRACE_FL_ENABLED))
				return 0;

			rec->flags &= ~FTRACE_FL_ENABLED;
		}
	}

	if (rec->flags & FTRACE_FL_ENABLED)
		return ftrace_make_call(rec, ftrace_addr);
	else
		return ftrace_make_nop(NULL, rec, ftrace_addr);
}
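
/*
 * Descriptive summary of the function below: walk every dyn_ftrace
 * record and bring its call site in line with the requested state.
 * Free, failed and never-converted records are skipped, as are sites
 * with a kprobe registered on them (those are frozen instead). A failed
 * update marks the record FAILED; boot-time and non-core-kernel
 * addresses are simply freed, anything else is reported via
 * ftrace_bug() and stops the walk.
 */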
static void ftrace_replace_code(int enable)
{
	struct dyn_ftrace *rec;
	struct ftrace_page *pg;
	int failed;

	do_for_each_ftrace_rec(pg, rec) {
		/*
		 * Skip over free records, records that have
		 * failed and not converted.
		 */
		if (rec->flags & FTRACE_FL_FREE ||
		    rec->flags & FTRACE_FL_FAILED ||
		    !(rec->flags & FTRACE_FL_CONVERTED))
			continue;

		/* ignore updates to this record's mcount site */
		if (get_kprobe((void *)rec->ip)) {
			freeze_record(rec);
			continue;
		} else {
			unfreeze_record(rec);
		}

		failed = __ftrace_replace_code(rec, enable);
		if (failed) {
			rec->flags |= FTRACE_FL_FAILED;
			if ((system_state == SYSTEM_BOOTING) ||
			    !core_kernel_text(rec->ip)) {
				ftrace_free_rec(rec);
			} else {
				ftrace_bug(failed, rec->ip);
				/* Stop processing */
				return;
			}
		}
	} while_for_each_ftrace_rec();
}
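
/*
 * Descriptive summary of the function below: convert a freshly recorded
 * mcount call site into a nop. Returns 1 on success; on failure the
 * error is reported via ftrace_bug(), the record is marked FAILED and
 * 0 is returned.
 */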
static int
ftrace_code_disable(struct module *mod, struct dyn_ftrace *rec)
{
	unsigned long ip;
	int ret;

	ip = rec->ip;

	ret = ftrace_make_nop(mod, rec, MCOUNT_ADDR);
	if (ret) {
		ftrace_bug(ret, ip);
		rec->flags |= FTRACE_FL_FAILED;
		return 0;
	}
	return 1;
}

/*
 * archs can override this function if they must do something
 * before the modifying code is performed.
 */
int __weak ftrace_arch_code_modify_prepare(void)
{
	return 0;
}

/*
 * archs can override this function if they must do something
 * after the modifying code is performed.
 */
int __weak ftrace_arch_code_modify_post_process(void)
{
	return 0;
}
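
/*
 * Descriptive summary of the function below: runs with the machine
 * stopped via stop_machine() and interprets the command bitmask:
 * enable or disable all call sites, swap in the current
 * ftrace_trace_function, and start or stop the function-graph
 * return caller.
 */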
static int __ftrace_modify_code(void *data)
{
	int *command = data;

	if (*command & FTRACE_ENABLE_CALLS)
		ftrace_replace_code(1);
	else if (*command & FTRACE_DISABLE_CALLS)
		ftrace_replace_code(0);

	if (*command & FTRACE_UPDATE_TRACE_FUNC)
		ftrace_update_ftrace_func(ftrace_trace_function);

	if (*command & FTRACE_START_FUNC_RET)
		ftrace_enable_ftrace_graph_caller();
	else if (*command & FTRACE_STOP_FUNC_RET)
		ftrace_disable_ftrace_graph_caller();

	return 0;
}
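
/*
 * Descriptive summary of the function below: apply a code-modification
 * command. Give the architecture a chance to prepare, run
 * __ftrace_modify_code() under stop_machine(), then let the
 * architecture post-process. A failing prepare hook aborts the update;
 * both hooks trigger FTRACE_WARN_ON() on error.
 */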
static void ftrace_run_update_code(int command)
{
	int ret;

	ret = ftrace_arch_code_modify_prepare();
	FTRACE_WARN_ON(ret);
	if (ret)
		return;

	stop_machine(__ftrace_modify_code, &command, NULL);

	ret = ftrace_arch_code_modify_post_process();
	FTRACE_WARN_ON(ret);
}
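
/*
 * Descriptive summary of the code below: ftrace_start_up counts the
 * users that want call sites enabled. ftrace_startup() increments it
 * and enables the calls; ftrace_shutdown() decrements it and only
 * disables the calls once the count drops back to zero. A change of
 * ftrace_trace_function is folded into the same update via
 * FTRACE_UPDATE_TRACE_FUNC.
 */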
static ftrace_func_t saved_ftrace_func;
static int ftrace_start_up;

static void ftrace_startup_enable(int command)
{
	if (saved_ftrace_func != ftrace_trace_function) {
		saved_ftrace_func = ftrace_trace_function;
		command |= FTRACE_UPDATE_TRACE_FUNC;
	}

	if (!command || !ftrace_enabled)
		return;

	ftrace_run_update_code(command);
}

static void ftrace_startup(int command)
{
	if (unlikely(ftrace_disabled))
		return;

	ftrace_start_up++;
	command |= FTRACE_ENABLE_CALLS;

	ftrace_startup_enable(command);
}

static void ftrace_shutdown(int command)
{
	if (unlikely(ftrace_disabled))
		return;

	ftrace_start_up--;
	if (!ftrace_start_up)
		command |= FTRACE_DISABLE_CALLS;

	if (saved_ftrace_func != ftrace_trace_function) {
		saved_ftrace_func = ftrace_trace_function;
		command |= FTRACE_UPDATE_TRACE_FUNC;
	}

	if (!command || !ftrace_enabled)
		return;

	ftrace_run_update_code(command);
}
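
/*
 * Descriptive summary of the two functions below: entry points used
 * when tracing is switched on or off via sysctl. They re-apply or
 * remove the call-site patches based on the current ftrace_start_up
 * count; on startup, saved_ftrace_func is cleared so the trace
 * function is re-installed on the next update.
 */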
static void ftrace_startup_sysctl(void)
{
	int command = FTRACE_ENABLE_MCOUNT;

	if (unlikely(ftrace_disabled))
		return;

	/* Force update next time */
	saved_ftrace_func = NULL;
	/* ftrace_start_up is true if we want ftrace running */
	if (ftrace_start_up)
		command |= FTRACE_ENABLE_CALLS;

	ftrace_run_update_code(command);
}

static void ftrace_shutdown_sysctl(void)
{
	int command = FTRACE_DISABLE_MCOUNT;

	if (unlikely(ftrace_disabled))
		return;

	/* ftrace_start_up is true if ftrace is running */
	if (ftrace_start_up)
		command |= FTRACE_DISABLE_CALLS;

	ftrace_run_update_code(command);
}
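
/*
 * Descriptive summary of the function below: drain the list of newly
 * recorded call sites (ftrace_new_addrs). Each one is converted to a
 * nop via ftrace_code_disable() and marked CONVERTED; records that
 * cannot be converted are freed. The elapsed time and the running
 * total of converted sites are accounted in ftrace_update_time and
 * ftrace_update_tot_cnt.
 */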
static cycle_t ftrace_update_time;
static unsigned long ftrace_update_cnt;
unsigned long ftrace_update_tot_cnt;

static int ftrace_update_code(struct module *mod)
{
	struct dyn_ftrace *p;
	cycle_t start, stop;

	start = ftrace_now(raw_smp_processor_id());
	ftrace_update_cnt = 0;

	while (ftrace_new_addrs) {
		/* If something went wrong, bail without enabling anything */
		if (unlikely(ftrace_disabled))
			return -1;

		p = ftrace_new_addrs;
		ftrace_new_addrs = p->newlist;
		p->flags = 0L;

		/* convert record (i.e, patch mcount-call with NOP) */
		if (ftrace_code_disable(mod, p)) {
			p->flags |= FTRACE_FL_CONVERTED;
			ftrace_update_cnt++;
		} else
			ftrace_free_rec(p);
	}

	stop = ftrace_now(raw_smp_processor_id());
	ftrace_update_time = stop - start;
	ftrace_update_tot_cnt += ftrace_update_cnt;

	return 0;
}
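
/*
 * Descriptive summary of the function below: pre-allocate pages of
 * dyn_ftrace records at boot, sized for roughly num_to_init entries at
 * ENTRIES_PER_PAGE per page. Allocation failures here are not fatal;
 * more pages can be added later.
 */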
static int __init ftrace_dyn_table_alloc(unsigned long num_to_init)
{
	struct ftrace_page *pg;
	int cnt;
	int i;

	/* allocate a few pages */
	ftrace_pages_start = (void *)get_zeroed_page(GFP_KERNEL);
	if (!ftrace_pages_start)
		return -1;

	/*
	 * Allocate a few more pages.
	 *
	 * TODO: have some parser search vmlinux before
	 *   final linking to find all calls to ftrace.
	 *   Then we can:
	 *    a) know how many pages to allocate.
	 *     and/or
	 *    b) set up the table then.
	 *
	 *  The dynamic code is still necessary for
	 *  modules.
	 */

	pg = ftrace_pages = ftrace_pages_start;

	cnt = num_to_init / ENTRIES_PER_PAGE;
	pr_info("ftrace: allocating %ld entries in %d pages\n",
		num_to_init, cnt + 1);

	for (i = 0; i < cnt; i++) {
		pg->next = (void *)get_zeroed_page(GFP_KERNEL);

		/* If we fail, we'll try later anyway */
		if (!pg->next)
			break;

		pg = pg->next;
	}

	return 0;
}
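
/*
 * Descriptive summary of the definitions below: state for the seq_file
 * iterators behind the ftrace debugfs files (available_filter_functions,
 * set_ftrace_filter and friends). The FTRACE_ITER_* flags select which
 * records are shown: filtered, notrace, failed, every function
 * (PRINTALL) or the probe hash (HASH).
 */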
enum {
	FTRACE_ITER_FILTER	= (1 << 0),
	FTRACE_ITER_CONT	= (1 << 1),
	FTRACE_ITER_NOTRACE	= (1 << 2),
	FTRACE_ITER_FAILURES	= (1 << 3),
	FTRACE_ITER_PRINTALL	= (1 << 4),
	FTRACE_ITER_HASH	= (1 << 5),
};

#define FTRACE_BUFF_MAX (KSYM_SYMBOL_LEN+4) /* room for wildcards */

struct ftrace_iterator {
	struct ftrace_page	*pg;
	int			hidx;
	int			idx;
	unsigned		flags;
	unsigned char		buffer[FTRACE_BUFF_MAX+1];
	unsigned		buffer_idx;
	unsigned		filtered;
};
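
/*
 * Descriptive summary of the "hash" iterators below: they walk
 * ftrace_func_hash, the buckets of registered function probes
 * (struct ftrace_func_probe), through the same seq_file interface.
 */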
2009-02-17 04:28:00 +08:00
|
|
|
static void *
|
|
|
|
t_hash_next(struct seq_file *m, void *v, loff_t *pos)
|
|
|
|
{
|
|
|
|
struct ftrace_iterator *iter = m->private;
|
|
|
|
struct hlist_node *hnd = v;
|
|
|
|
struct hlist_head *hhd;
|
|
|
|
|
|
|
|
WARN_ON(!(iter->flags & FTRACE_ITER_HASH));
|
|
|
|
|
|
|
|
(*pos)++;
|
|
|
|
|
|
|
|
retry:
|
|
|
|
if (iter->hidx >= FTRACE_FUNC_HASHSIZE)
|
|
|
|
return NULL;
|
|
|
|
|
|
|
|
hhd = &ftrace_func_hash[iter->hidx];
|
|
|
|
|
|
|
|
if (hlist_empty(hhd)) {
|
|
|
|
iter->hidx++;
|
|
|
|
hnd = NULL;
|
|
|
|
goto retry;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (!hnd)
|
|
|
|
hnd = hhd->first;
|
|
|
|
else {
|
|
|
|
hnd = hnd->next;
|
|
|
|
if (!hnd) {
|
|
|
|
iter->hidx++;
|
|
|
|
goto retry;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
return hnd;
|
|
|
|
}

static void *t_hash_start(struct seq_file *m, loff_t *pos)
{
	struct ftrace_iterator *iter = m->private;
	void *p = NULL;

	iter->flags |= FTRACE_ITER_HASH;

	return t_hash_next(m, p, pos);
}

static int t_hash_show(struct seq_file *m, void *v)
{
	struct ftrace_func_probe *rec;
	struct hlist_node *hnd = v;
	char str[KSYM_SYMBOL_LEN];

	rec = hlist_entry(hnd, struct ftrace_func_probe, node);

	if (rec->ops->print)
		return rec->ops->print(m, rec->ip, rec->ops, rec->data);

	kallsyms_lookup(rec->ip, NULL, NULL, NULL, str);
	seq_printf(m, "%s:", str);

	kallsyms_lookup((unsigned long)rec->ops->func, NULL, NULL, NULL, str);
	seq_printf(m, "%s", str);

	if (rec->data)
		seq_printf(m, ":%p", rec->data);
	seq_putc(m, '\n');

	return 0;
}

static void *
t_next(struct seq_file *m, void *v, loff_t *pos)
{
	struct ftrace_iterator *iter = m->private;
	struct dyn_ftrace *rec = NULL;

	if (iter->flags & FTRACE_ITER_HASH)
		return t_hash_next(m, v, pos);

	(*pos)++;

	if (iter->flags & FTRACE_ITER_PRINTALL)
		return NULL;

 retry:
	if (iter->idx >= iter->pg->index) {
		if (iter->pg->next) {
			iter->pg = iter->pg->next;
			iter->idx = 0;
			goto retry;
		} else {
			iter->idx = -1;
		}
	} else {
		rec = &iter->pg->records[iter->idx++];
		if ((rec->flags & FTRACE_FL_FREE) ||

		    (!(iter->flags & FTRACE_ITER_FAILURES) &&
		     (rec->flags & FTRACE_FL_FAILED)) ||

		    ((iter->flags & FTRACE_ITER_FAILURES) &&
		     !(rec->flags & FTRACE_FL_FAILED)) ||

		    ((iter->flags & FTRACE_ITER_FILTER) &&
		     !(rec->flags & FTRACE_FL_FILTER)) ||

		    ((iter->flags & FTRACE_ITER_NOTRACE) &&
		     !(rec->flags & FTRACE_FL_NOTRACE))) {
			rec = NULL;
			goto retry;
		}
	}

	return rec;
}

static void *t_start(struct seq_file *m, loff_t *pos)
{
	struct ftrace_iterator *iter = m->private;
	void *p = NULL;

	mutex_lock(&ftrace_lock);
	/*
	 * For set_ftrace_filter reading, if we have the filter
	 * off, we can short cut and just print out that all
	 * functions are enabled.
	 */
	if (iter->flags & FTRACE_ITER_FILTER && !ftrace_filtered) {
		if (*pos > 0)
			return t_hash_start(m, pos);
		iter->flags |= FTRACE_ITER_PRINTALL;
		(*pos)++;
		return iter;
	}

	if (iter->flags & FTRACE_ITER_HASH)
		return t_hash_start(m, pos);

	if (*pos > 0) {
		if (iter->idx < 0)
			return p;
		(*pos)--;
		iter->idx--;
	}
function tracing: fix wrong pos computing when the read buffer has been filled

Impact: make the output of available_filter_functions complete

Phenomenon:
The first value of dyn_ftrace_total_info is not equal to
`cat available_filter_functions | wc -l`, but they should be equal.

Root cause:
When printing functions with seq_printf in t_show, if the read buffer
is overflowed by the current function record, that function is not
copied to user space through the read buffer; it is simply dropped, so
it never appears in the output. In other words, the last function
written into the read buffer is lost whenever it overflows the buffer.
This also applies to set_ftrace_filter if set_ftrace_filter contains
more bytes than the read buffer can hold.

Fix:
Check the return value of seq_printf; if it is less than 0, the
function was not printed. In that case decrease the position so the
function is printed again on the next read, into the next read buffer.
Another small fix is to report the correct count of allocated pages.

Signed-off-by: walimis <walimisdev@gmail.com>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

	p = t_next(m, p, pos);

	if (!p)
		return t_hash_start(m, pos);

	return p;
}
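
/*
 * Illustrative sketch only (not part of the original file) of the seq_file
 * pattern the changelog above describes: when seq_printf() overflows the
 * read buffer it returns a negative value and the record would otherwise be
 * silently dropped, so the position is backed up and the same record is
 * emitted again into the next read buffer.  my_show() is a made-up name and
 * not a real seq_operations callback; the code in this file achieves the
 * same effect with the "(*pos)--; iter->idx--;" rewind in t_start() above.
 */
static int my_show(struct seq_file *m, void *v, loff_t *pos)
{
	struct dyn_ftrace *rec = v;
	char str[KSYM_SYMBOL_LEN];

	kallsyms_lookup(rec->ip, NULL, NULL, NULL, str);
	if (seq_printf(m, "%s\n", str) < 0)
		(*pos)--;	/* overflowed: re-emit this record on the next read */

	return 0;
}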

static void t_stop(struct seq_file *m, void *p)
{
	mutex_unlock(&ftrace_lock);
}

static int t_show(struct seq_file *m, void *v)
{
	struct ftrace_iterator *iter = m->private;
	struct dyn_ftrace *rec = v;
	char str[KSYM_SYMBOL_LEN];

	if (iter->flags & FTRACE_ITER_HASH)
		return t_hash_show(m, v);

	if (iter->flags & FTRACE_ITER_PRINTALL) {
		seq_printf(m, "#### all functions enabled ####\n");
		return 0;
	}

	if (!rec)
		return 0;

	kallsyms_lookup(rec->ip, NULL, NULL, NULL, str);

	seq_printf(m, "%s\n", str);

	return 0;
}

static struct seq_operations show_ftrace_seq_ops = {
	.start = t_start,
	.next = t_next,
	.stop = t_stop,
	.show = t_show,
};

static int
ftrace_avail_open(struct inode *inode, struct file *file)
{
	struct ftrace_iterator *iter;
	int ret;

	if (unlikely(ftrace_disabled))
		return -ENODEV;

	iter = kzalloc(sizeof(*iter), GFP_KERNEL);
	if (!iter)
		return -ENOMEM;

	iter->pg = ftrace_pages_start;

	ret = seq_open(file, &show_ftrace_seq_ops);
	if (!ret) {
		struct seq_file *m = file->private_data;

		m->private = iter;
	} else {
		kfree(iter);
	}

	return ret;
}

int ftrace_avail_release(struct inode *inode, struct file *file)
{
	struct seq_file *m = (struct seq_file *)file->private_data;
	struct ftrace_iterator *iter = m->private;

	seq_release(inode, file);
	kfree(iter);

	return 0;
}

static int
ftrace_failures_open(struct inode *inode, struct file *file)
{
	int ret;
	struct seq_file *m;
	struct ftrace_iterator *iter;

	ret = ftrace_avail_open(inode, file);
	if (!ret) {
		m = (struct seq_file *)file->private_data;
		iter = (struct ftrace_iterator *)m->private;
		iter->flags = FTRACE_ITER_FAILURES;
	}

	return ret;
}

static void ftrace_filter_reset(int enable)
{
	struct ftrace_page *pg;
	struct dyn_ftrace *rec;
	unsigned long type = enable ? FTRACE_FL_FILTER : FTRACE_FL_NOTRACE;

	mutex_lock(&ftrace_lock);
	if (enable)
		ftrace_filtered = 0;
	do_for_each_ftrace_rec(pg, rec) {
		if (rec->flags & FTRACE_FL_FAILED)
			continue;
		rec->flags &= ~type;
	} while_for_each_ftrace_rec();
	mutex_unlock(&ftrace_lock);
}

static int
ftrace_regex_open(struct inode *inode, struct file *file, int enable)
{
	struct ftrace_iterator *iter;
	int ret = 0;

	if (unlikely(ftrace_disabled))
		return -ENODEV;

	iter = kzalloc(sizeof(*iter), GFP_KERNEL);
	if (!iter)
		return -ENOMEM;

	mutex_lock(&ftrace_regex_lock);
	if ((file->f_mode & FMODE_WRITE) &&
	    !(file->f_flags & O_APPEND))
		ftrace_filter_reset(enable);

	if (file->f_mode & FMODE_READ) {
		iter->pg = ftrace_pages_start;
		iter->flags = enable ? FTRACE_ITER_FILTER :
			FTRACE_ITER_NOTRACE;

		ret = seq_open(file, &show_ftrace_seq_ops);
		if (!ret) {
			struct seq_file *m = file->private_data;
			m->private = iter;
		} else
			kfree(iter);
	} else
		file->private_data = iter;
	mutex_unlock(&ftrace_regex_lock);

	return ret;
}

static int
ftrace_filter_open(struct inode *inode, struct file *file)
{
	return ftrace_regex_open(inode, file, 1);
}

static int
ftrace_notrace_open(struct inode *inode, struct file *file)
{
	return ftrace_regex_open(inode, file, 0);
}
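
/*
 * Illustrative usage note (an assumption about a typical setup, not part of
 * the original file): with the tracing debugfs directory mounted, commonly
 * at /sys/kernel/debug/tracing (or /debug/tracing on older systems), the
 * files opened above are normally driven like:
 *
 *   echo 'sched_*'    > set_ftrace_filter    # trace only sched_* functions
 *   echo 'do_fork'    > set_ftrace_notrace   # trace everything but do_fork
 *   echo 'hrtimer_*' >> set_ftrace_filter    # O_APPEND: add without a reset
 *   cat set_ftrace_filter                    # list the current filter
 *
 * Opening for write without O_APPEND clears the previous settings first,
 * which is what the ftrace_filter_reset() call above implements.
 */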

static loff_t
ftrace_regex_lseek(struct file *file, loff_t offset, int origin)
{
	loff_t ret;

	if (file->f_mode & FMODE_READ)
		ret = seq_lseek(file, offset, origin);
	else
		file->f_pos = ret = 1;

	return ret;
}

enum {
	MATCH_FULL,
	MATCH_FRONT_ONLY,
	MATCH_MIDDLE_ONLY,
	MATCH_END_ONLY,
};

/*
 * (static function - no need for kernel doc)
 *
 * Pass in a buffer containing a glob and this function will
 * set search to point to the search part of the buffer and
 * return the type of search it is (see enum above).
 * This does modify buff.
 *
 * Returns enum type.
 *  search returns the pointer to use for comparison.
 *  not returns 1 if buff started with a '!'
 *     0 otherwise.
 */
static int
ftrace_setup_glob(char *buff, int len, char **search, int *not)
{
	int type = MATCH_FULL;
	int i;

	if (buff[0] == '!') {
		*not = 1;
		buff++;
		len--;
	} else
		*not = 0;

	*search = buff;

	for (i = 0; i < len; i++) {
		if (buff[i] == '*') {
			if (!i) {
				*search = buff + 1;
				type = MATCH_END_ONLY;
			} else {
				if (type == MATCH_END_ONLY)
					type = MATCH_MIDDLE_ONLY;
				else
					type = MATCH_FRONT_ONLY;
				buff[i] = 0;
				break;
			}
		}
	}

	return type;
}
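
/*
 * Rough examples (added for illustration, following the code above) of how
 * ftrace_setup_glob() classifies a pattern:
 *
 *   "sched_switch"  -> MATCH_FULL,        search = "sched_switch"
 *   "sched_*"       -> MATCH_FRONT_ONLY,  search = "sched_"
 *   "*_lock"        -> MATCH_END_ONLY,    search = "_lock"
 *   "*spin*"        -> MATCH_MIDDLE_ONLY, search = "spin"
 *   "!sched_*"      -> MATCH_FRONT_ONLY,  search = "sched_", *not = 1
 */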

static int ftrace_match(char *str, char *regex, int len, int type)
{
	int matched = 0;
	char *ptr;

	switch (type) {
	case MATCH_FULL:
		if (strcmp(str, regex) == 0)
			matched = 1;
		break;
	case MATCH_FRONT_ONLY:
		if (strncmp(str, regex, len) == 0)
			matched = 1;
		break;
	case MATCH_MIDDLE_ONLY:
		if (strstr(str, regex))
			matched = 1;
		break;
	case MATCH_END_ONLY:
		ptr = strstr(str, regex);
		if (ptr && (ptr[len] == 0))
			matched = 1;
		break;
	}

	return matched;
}

static int
ftrace_match_record(struct dyn_ftrace *rec, char *regex, int len, int type)
{
	char str[KSYM_SYMBOL_LEN];

	kallsyms_lookup(rec->ip, NULL, NULL, NULL, str);
	return ftrace_match(str, regex, len, type);
}

static void ftrace_match_records(char *buff, int len, int enable)
{
	unsigned int search_len;
	struct ftrace_page *pg;
	struct dyn_ftrace *rec;
	unsigned long flag;
	char *search;
	int type;
	int not;

	flag = enable ? FTRACE_FL_FILTER : FTRACE_FL_NOTRACE;
	type = ftrace_setup_glob(buff, len, &search, &not);

	search_len = strlen(search);

	mutex_lock(&ftrace_lock);
	do_for_each_ftrace_rec(pg, rec) {

		if (rec->flags & FTRACE_FL_FAILED)
			continue;

		if (ftrace_match_record(rec, search, search_len, type)) {
			if (not)
				rec->flags &= ~flag;
			else
				rec->flags |= flag;
		}
		/*
		 * Only enable filtering if we have a function that
		 * is filtered on.
		 */
		if (enable && (rec->flags & FTRACE_FL_FILTER))
			ftrace_filtered = 1;
	} while_for_each_ftrace_rec();
	mutex_unlock(&ftrace_lock);
}

static int
ftrace_match_module_record(struct dyn_ftrace *rec, char *mod,
			   char *regex, int len, int type)
{
	char str[KSYM_SYMBOL_LEN];
	char *modname;

	kallsyms_lookup(rec->ip, NULL, NULL, &modname, str);

	if (!modname || strcmp(modname, mod))
		return 0;

	/* blank search means to match all funcs in the mod */
	if (len)
		return ftrace_match(str, regex, len, type);
	else
		return 1;
}

static void ftrace_match_module_records(char *buff, char *mod, int enable)
{
	unsigned search_len = 0;
	struct ftrace_page *pg;
	struct dyn_ftrace *rec;
	int type = MATCH_FULL;
	char *search = buff;
	unsigned long flag;
	int not = 0;

	flag = enable ? FTRACE_FL_FILTER : FTRACE_FL_NOTRACE;

	/* blank or '*' mean the same */
	if (strcmp(buff, "*") == 0)
		buff[0] = 0;

	/* handle the case of 'dont filter this module' */
	if (strcmp(buff, "!") == 0 || strcmp(buff, "!*") == 0) {
		buff[0] = 0;
		not = 1;
	}

	if (strlen(buff)) {
		type = ftrace_setup_glob(buff, strlen(buff), &search, &not);
		search_len = strlen(search);
	}

	mutex_lock(&ftrace_lock);
	do_for_each_ftrace_rec(pg, rec) {

		if (rec->flags & FTRACE_FL_FAILED)
			continue;

		if (ftrace_match_module_record(rec, mod,
					       search, search_len, type)) {
			if (not)
				rec->flags &= ~flag;
			else
				rec->flags |= flag;
		}
		if (enable && (rec->flags & FTRACE_FL_FILTER))
			ftrace_filtered = 1;

	} while_for_each_ftrace_rec();
	mutex_unlock(&ftrace_lock);
}

/*
 * We register the module command as a template to show others how
 * to register a command as well.
 */

static int
ftrace_mod_callback(char *func, char *cmd, char *param, int enable)
{
	char *mod;

	/*
	 * cmd == 'mod' because we only registered this func
	 * for the 'mod' ftrace_func_command.
	 * But if you register one func with multiple commands,
	 * you can tell which command was used by the cmd
	 * parameter.
	 */

	/* we must have a module name */
	if (!param)
		return -EINVAL;

	mod = strsep(&param, ":");
	if (!strlen(mod))
		return -EINVAL;

	ftrace_match_module_records(func, mod, enable);
	return 0;
}

static struct ftrace_func_command ftrace_mod_cmd = {
	.name			= "mod",
	.func			= ftrace_mod_callback,
};

static int __init ftrace_mod_cmd_init(void)
{
	return register_ftrace_command(&ftrace_mod_cmd);
}
device_initcall(ftrace_mod_cmd_init);
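
/*
 * Illustrative sketch only (not part of the original file): following the
 * template above, another command could be wired up like this.  The name
 * "foo" and foo_callback() are made-up placeholders; a real command would
 * parse @param and mark the matching records, much as ftrace_mod_callback()
 * does.  With the 'mod' command, a write such as "*read*:mod:ext3" to
 * set_ftrace_filter is expected to filter on the *read* functions that
 * live in the ext3 module.
 */
static int
foo_callback(char *func, char *cmd, char *param, int enable)
{
	/* func is the glob left of ':', cmd is "foo", param is what follows */
	if (!param)
		return -EINVAL;

	/* act on the glob in func; here we simply apply it as a filter */
	ftrace_match_records(func, strlen(func), enable);
	return 0;
}

static struct ftrace_func_command foo_cmd = {
	.name			= "foo",
	.func			= foo_callback,
};

static int __init foo_cmd_init(void)
{
	return register_ftrace_command(&foo_cmd);
}
device_initcall(foo_cmd_init);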

ftrace: trace different functions with a different tracer

Impact: new feature

Currently, the function tracer only gives you the ability to hook a tracer
to all functions being traced. The dynamic function tracer allows you to
pick and choose which of those functions will be traced, but every function
being traced calls every tracer that registered with the function tracer.

This patch adds a new feature that allows a tracer to hook to specific
functions, even when all functions are being traced. It allows different
functions to call different tracer hooks.

The way this is accomplished is by a special function that hooks into the
function tracer and sets up a hash table that knows which tracer hook to
call for which function. This is the most general and easiest method to
accomplish this. Later, an arch may choose to supply its own method of
changing the mcount call of a function to call a different tracer. But that
will be an exercise for the future.

To register a function:

 struct ftrace_hook_ops {
	void (*func)(unsigned long ip,
		     unsigned long parent_ip,
		     void **data);
	int (*callback)(unsigned long ip, void **data);
	void (*free)(void **data);
 };

 int register_ftrace_function_hook(char *glob, struct ftrace_hook_ops *ops,
				   void *data);

glob is a simple glob to search for the functions to hook.
ops is a pointer to the operations (listed below).
data is the default data to be passed to the hook functions when traced.

ops:
 func is the hook function to call when the functions are traced.
 callback is called when setting up the hash, for when the tracer needs to
   do something special for each function being traced and wants to give
   each function its own data. The address of the entry's data is passed to
   this callback, so the callback may update the entry to whatever it would
   like.
 free is a callback for when the entry is freed. In case the tracer
   allocated any data, it is given the chance to free it.

To unregister we have three functions:

 void
 unregister_ftrace_function_hook(char *glob, struct ftrace_hook_ops *ops,
				 void *data)

This will unregister all hooks that match glob, point to ops, and have
matching data. (Note: if glob is NULL, blank or '*', all functions will be
tested.)

 void
 unregister_ftrace_function_hook_func(char *glob,
				      struct ftrace_hook_ops *ops)

This will unregister all functions matching glob that have an entry
pointing to ops.

 void unregister_ftrace_function_hook_all(char *glob)

This simply unregisters all funcs.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
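
/*
 * Hedged sketch of the interface described in the changelog above (not part
 * of the original file).  Note that in this file the "hook" naming from the
 * changelog became "probe": struct ftrace_probe_ops and
 * register_ftrace_function_probe().  my_probe() and the "sched_*" glob are
 * made-up examples; the callback signature follows the calls made by
 * function_trace_probe_call() below.
 */
static void my_probe(unsigned long ip, unsigned long parent_ip, void **data)
{
	/* called from the function tracer for every matched function */
}

static struct ftrace_probe_ops my_probe_ops = {
	.func	= my_probe,
};

static void my_probe_register(void)
{
	/* attach my_probe() to every function matching the glob */
	register_ftrace_function_probe("sched_*", &my_probe_ops, NULL);
}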

static void
function_trace_probe_call(unsigned long ip, unsigned long parent_ip)
{
	struct ftrace_func_probe *entry;
	struct hlist_head *hhd;
	struct hlist_node *n;
	unsigned long key;
	int resched;

	key = hash_long(ip, FTRACE_HASH_BITS);

	hhd = &ftrace_func_hash[key];

	if (hlist_empty(hhd))
		return;

	/*
	 * Disable preemption for these calls to prevent a RCU grace
	 * period. This syncs the hash iteration and freeing of items
	 * on the hash. rcu_read_lock is too dangerous here.
	 */
	resched = ftrace_preempt_disable();
	hlist_for_each_entry_rcu(entry, n, hhd, node) {
		if (entry->ip == ip)
			entry->ops->func(ip, parent_ip, &entry->data);
	}
	ftrace_preempt_enable(resched);
}

static struct ftrace_ops trace_probe_ops __read_mostly =
{
	.func = function_trace_probe_call,
};

static int ftrace_probe_registered;

static void __enable_ftrace_function_probe(void)
{
	int i;

	if (ftrace_probe_registered)
		return;

	for (i = 0; i < FTRACE_FUNC_HASHSIZE; i++) {
		struct hlist_head *hhd = &ftrace_func_hash[i];
		if (hhd->first)
			break;
	}
	/* Nothing registered? */
	if (i == FTRACE_FUNC_HASHSIZE)
		return;

	__register_ftrace_function(&trace_probe_ops);
	ftrace_startup(0);
	ftrace_probe_registered = 1;
}

static void __disable_ftrace_function_probe(void)
{
	int i;

	if (!ftrace_probe_registered)
		return;

	for (i = 0; i < FTRACE_FUNC_HASHSIZE; i++) {
		struct hlist_head *hhd = &ftrace_func_hash[i];
		if (hhd->first)
			return;
	}

	/* no more funcs left */
	__unregister_ftrace_function(&trace_probe_ops);
	ftrace_shutdown(0);
	ftrace_probe_registered = 0;
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
static void ftrace_free_entry_rcu(struct rcu_head *rhp)
|
|
|
|
{
|
2009-02-18 01:32:04 +08:00
|
|
|
struct ftrace_func_probe *entry =
|
|
|
|
container_of(rhp, struct ftrace_func_probe, rcu);
|
2009-02-15 04:29:06 +08:00
|
|
|
|
|
|
|
if (entry->ops->free)
|
|
|
|
entry->ops->free(&entry->data);
|
|
|
|
kfree(entry);
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
int
|
2009-02-18 01:32:04 +08:00
|
|
|
register_ftrace_function_probe(char *glob, struct ftrace_probe_ops *ops,
|
2009-02-15 04:29:06 +08:00
|
|
|
void *data)
|
|
|
|
{
|
2009-02-18 01:32:04 +08:00
|
|
|
struct ftrace_func_probe *entry;
|
2009-02-15 04:29:06 +08:00
|
|
|
struct ftrace_page *pg;
|
|
|
|
struct dyn_ftrace *rec;
|
|
|
|
int type, len, not;
|
2009-02-18 00:20:26 +08:00
|
|
|
unsigned long key;
|
2009-02-15 04:29:06 +08:00
|
|
|
int count = 0;
|
|
|
|
char *search;
|
|
|
|
|
|
|
|
type = ftrace_setup_glob(glob, strlen(glob), &search, &not);
|
|
|
|
len = strlen(search);
|
|
|
|
|
2009-02-18 01:32:04 +08:00
|
|
|
/* we do not support '!' for function probes */
|
2009-02-15 04:29:06 +08:00
|
|
|
if (WARN_ON(not))
|
|
|
|
return -EINVAL;
|
|
|
|
|
|
|
|
mutex_lock(&ftrace_lock);
|
|
|
|
do_for_each_ftrace_rec(pg, rec) {
|
|
|
|
|
|
|
|
if (rec->flags & FTRACE_FL_FAILED)
|
|
|
|
continue;
|
|
|
|
|
|
|
|
if (!ftrace_match_record(rec, search, len, type))
|
|
|
|
continue;
|
|
|
|
|
|
|
|
entry = kmalloc(sizeof(*entry), GFP_KERNEL);
|
|
|
|
if (!entry) {
|
2009-02-18 01:32:04 +08:00
|
|
|
/* If we did not process any, then return error */
|
2009-02-15 04:29:06 +08:00
|
|
|
if (!count)
|
|
|
|
count = -ENOMEM;
|
|
|
|
goto out_unlock;
|
|
|
|
}
|
|
|
|
|
|
|
|
count++;
|
|
|
|
|
|
|
|
entry->data = data;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* The caller might want to do something special
|
|
|
|
* for each function we find. We call the callback
|
|
|
|
* to give the caller an opportunity to do so.
|
|
|
|
*/
|
|
|
|
if (ops->callback) {
|
|
|
|
if (ops->callback(rec->ip, &entry->data) < 0) {
|
|
|
|
/* caller does not like this func */
|
|
|
|
kfree(entry);
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
entry->ops = ops;
|
|
|
|
entry->ip = rec->ip;
|
|
|
|
|
|
|
|
key = hash_long(entry->ip, FTRACE_HASH_BITS);
|
|
|
|
hlist_add_head_rcu(&entry->node, &ftrace_func_hash[key]);
|
|
|
|
|
|
|
|
} while_for_each_ftrace_rec();
|
2009-02-18 01:32:04 +08:00
|
|
|
__enable_ftrace_function_probe();
|
2009-02-15 04:29:06 +08:00
|
|
|
|
|
|
|
out_unlock:
|
|
|
|
mutex_unlock(&ftrace_lock);
|
|
|
|
|
|
|
|
return count;
|
|
|
|
}
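A quick sketch of the other side of this hash (illustrative only; the lookup helper below is hypothetical, not a function from this file): when a traced function fires, its instruction pointer is hashed the same way as in the registration loop above and the bucket is searched for a matching entry, whose ops->func is then invoked.

/* Sketch: find the probe entry for an ip, using the same bucketing as
 * the registration code above.  The real trace-time caller does this
 * under rcu_read_lock()/preemption disabled. */
static struct ftrace_func_probe *probe_lookup(unsigned long ip)
{
	struct ftrace_func_probe *entry;
	struct hlist_node *n;
	struct hlist_head *hhd;
	unsigned long key;

	key = hash_long(ip, FTRACE_HASH_BITS);
	hhd = &ftrace_func_hash[key];

	hlist_for_each_entry_rcu(entry, n, hhd, node) {
		if (entry->ip == ip)
			return entry;	/* caller then calls entry->ops->func() */
	}
	return NULL;
}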
|
|
|
|
|
|
|
|
enum {
|
2009-02-18 01:32:04 +08:00
|
|
|
PROBE_TEST_FUNC = 1,
|
|
|
|
PROBE_TEST_DATA = 2
|
2009-02-15 04:29:06 +08:00
|
|
|
};
|
|
|
|
|
|
|
|
static void
|
2009-02-18 01:32:04 +08:00
|
|
|
__unregister_ftrace_function_probe(char *glob, struct ftrace_probe_ops *ops,
|
2009-02-15 04:29:06 +08:00
|
|
|
void *data, int flags)
|
|
|
|
{
|
2009-02-18 01:32:04 +08:00
|
|
|
struct ftrace_func_probe *entry;
|
2009-02-15 04:29:06 +08:00
|
|
|
struct hlist_node *n, *tmp;
|
|
|
|
char str[KSYM_SYMBOL_LEN];
|
|
|
|
int type = MATCH_FULL;
|
|
|
|
int i, len = 0;
|
|
|
|
char *search;
|
|
|
|
|
|
|
|
if (glob && (!strcmp(glob, "*") || !strlen(glob)))
|
|
|
|
glob = NULL;
|
|
|
|
else {
|
|
|
|
int not;
|
|
|
|
|
|
|
|
type = ftrace_setup_glob(glob, strlen(glob), &search, &not);
|
|
|
|
len = strlen(search);
|
|
|
|
|
2009-02-18 01:32:04 +08:00
|
|
|
/* we do not support '!' for function probes */
|
2009-02-15 04:29:06 +08:00
|
|
|
if (WARN_ON(not))
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
mutex_lock(&ftrace_lock);
|
|
|
|
for (i = 0; i < FTRACE_FUNC_HASHSIZE; i++) {
|
|
|
|
struct hlist_head *hhd = &ftrace_func_hash[i];
|
|
|
|
|
|
|
|
hlist_for_each_entry_safe(entry, n, tmp, hhd, node) {
|
|
|
|
|
|
|
|
/* break up if statements for readability */
|
2009-02-18 01:32:04 +08:00
|
|
|
if ((flags & PROBE_TEST_FUNC) && entry->ops != ops)
|
2009-02-15 04:29:06 +08:00
|
|
|
continue;
|
|
|
|
|
2009-02-18 01:32:04 +08:00
|
|
|
if ((flags & PROBE_TEST_DATA) && entry->data != data)
|
2009-02-15 04:29:06 +08:00
|
|
|
continue;
|
|
|
|
|
|
|
|
/* do this last, since it is the most expensive */
|
|
|
|
if (glob) {
|
|
|
|
kallsyms_lookup(entry->ip, NULL, NULL,
|
|
|
|
NULL, str);
|
|
|
|
if (!ftrace_match(str, glob, len, type))
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
|
|
|
hlist_del(&entry->node);
|
|
|
|
call_rcu(&entry->rcu, ftrace_free_entry_rcu);
|
|
|
|
}
|
|
|
|
}
|
2009-02-18 01:32:04 +08:00
|
|
|
__disable_ftrace_function_probe();
|
2009-02-15 04:29:06 +08:00
|
|
|
mutex_unlock(&ftrace_lock);
|
|
|
|
}
|
|
|
|
|
|
|
|
void
|
2009-02-18 01:32:04 +08:00
|
|
|
unregister_ftrace_function_probe(char *glob, struct ftrace_probe_ops *ops,
|
2009-02-15 04:29:06 +08:00
|
|
|
void *data)
|
|
|
|
{
|
2009-02-18 01:32:04 +08:00
|
|
|
__unregister_ftrace_function_probe(glob, ops, data,
|
|
|
|
PROBE_TEST_FUNC | PROBE_TEST_DATA);
|
2009-02-15 04:29:06 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
void
|
2009-02-18 01:32:04 +08:00
|
|
|
unregister_ftrace_function_probe_func(char *glob, struct ftrace_probe_ops *ops)
|
2009-02-15 04:29:06 +08:00
|
|
|
{
|
2009-02-18 01:32:04 +08:00
|
|
|
__unregister_ftrace_function_probe(glob, ops, NULL, PROBE_TEST_FUNC);
|
2009-02-15 04:29:06 +08:00
|
|
|
}
|
|
|
|
|
2009-02-18 01:32:04 +08:00
|
|
|
void unregister_ftrace_function_probe_all(char *glob)
|
2009-02-15 04:29:06 +08:00
|
|
|
{
|
2009-02-18 01:32:04 +08:00
|
|
|
__unregister_ftrace_function_probe(glob, NULL, NULL, 0);
|
ftrace: trace different functions with a different tracer
Impact: new feature
Currently, the function tracer only gives you an ability to hook
a tracer to all functions being traced. The dynamic function trace
allows you to pick and choose which of those functions will be
traced, but all functions being traced will call all tracers that
registered with the function tracer.
This patch adds a new feature that allows a tracer to hook to specific
functions, even when all functions are being traced. It allows for
different functions to call different tracer hooks.
The way this is accomplished is by a special function that will hook
to the function tracer and will set up a hash table knowing which
tracer hook to call with which function. This is the most general
and easiest method to accomplish this. Later, an arch may choose
to supply their own method in changing the mcount call of a function
to call a different tracer. But that will be an exercise for the
future.
To register a function:
struct ftrace_hook_ops {
void (*func)(unsigned long ip,
unsigned long parent_ip,
void **data);
int (*callback)(unsigned long ip, void **data);
void (*free)(void **data);
};
int register_ftrace_function_hook(char *glob, struct ftrace_hook_ops *ops,
void *data);
glob is a simple glob to search for the functions to hook.
ops is a pointer to the operations (listed below)
data is the default data to be passed to the hook functions when traced
ops:
func is the hook function to call when the functions are traced
callback is a callback function that is called when setting up the hash.
That is, if the tracer needs to do something special for each
function, that is being traced, and wants to give each function
its own data. The address of the entry data is passed to this
callback, so that the callback may wish to update the entry to
whatever it would like.
free is a callback for when the entry is freed. In case the tracer
allocated any data, it is give the chance to free it.
To unregister we have three functions:
void
unregister_ftrace_function_hook(char *glob, struct ftrace_hook_ops *ops,
void *data)
This will unregister all hooks that match glob, point to ops, and
have their data matching data. (Note: if glob is NULL, blank, or '*',
all functions will be tested.)
void
unregister_ftrace_function_hook_func(char *glob,
struct ftrace_hook_ops *ops)
This will unregister all functions matching glob that have an entry
pointing to ops.
void unregister_ftrace_function_hook_all(char *glob)
This simply unregisters all hooks matching glob.
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
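As a rough sketch of the API described above (the hook name and the
"sched_*" glob are illustrative, not part of this patch), a tracer could
hook only the scheduler functions like this:

static void my_sched_hook(unsigned long ip, unsigned long parent_ip,
			  void **data)
{
	/* called only for functions matching the glob given at registration */
}

static struct ftrace_hook_ops my_sched_ops = {
	.func	= my_sched_hook,
};

/* hook every function matching "sched_*", with no per-entry data */
register_ftrace_function_hook("sched_*", &my_sched_ops, NULL);

/* later, remove only the hooks this tracer installed */
unregister_ftrace_function_hook("sched_*", &my_sched_ops, NULL);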
2009-02-15 04:29:06 +08:00
|
|
|
}
|
|
|
|
|
2009-02-14 13:40:25 +08:00
|
|
|
static LIST_HEAD(ftrace_commands);
|
|
|
|
static DEFINE_MUTEX(ftrace_cmd_mutex);
|
|
|
|
|
|
|
|
int register_ftrace_command(struct ftrace_func_command *cmd)
|
|
|
|
{
|
|
|
|
struct ftrace_func_command *p;
|
|
|
|
int ret = 0;
|
|
|
|
|
|
|
|
mutex_lock(&ftrace_cmd_mutex);
|
|
|
|
list_for_each_entry(p, &ftrace_commands, list) {
|
|
|
|
if (strcmp(cmd->name, p->name) == 0) {
|
|
|
|
ret = -EBUSY;
|
|
|
|
goto out_unlock;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
list_add(&cmd->list, &ftrace_commands);
|
|
|
|
out_unlock:
|
|
|
|
mutex_unlock(&ftrace_cmd_mutex);
|
|
|
|
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
int unregister_ftrace_command(struct ftrace_func_command *cmd)
|
|
|
|
{
|
|
|
|
struct ftrace_func_command *p, *n;
|
|
|
|
int ret = -ENODEV;
|
|
|
|
|
|
|
|
mutex_lock(&ftrace_cmd_mutex);
|
|
|
|
list_for_each_entry_safe(p, n, &ftrace_commands, list) {
|
|
|
|
if (strcmp(cmd->name, p->name) == 0) {
|
|
|
|
ret = 0;
|
|
|
|
list_del_init(&p->list);
|
|
|
|
goto out_unlock;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
out_unlock:
|
|
|
|
mutex_unlock(&ftrace_cmd_mutex);
|
|
|
|
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2009-02-14 06:08:48 +08:00
|
|
|
static int ftrace_process_regex(char *buff, int len, int enable)
|
|
|
|
{
|
2009-02-14 13:40:25 +08:00
|
|
|
char *func, *command, *next = buff;
|
2009-02-18 00:20:26 +08:00
|
|
|
struct ftrace_func_command *p;
|
2009-02-14 13:40:25 +08:00
|
|
|
int ret = -EINVAL;
|
2009-02-14 06:08:48 +08:00
|
|
|
|
|
|
|
func = strsep(&next, ":");
|
|
|
|
|
|
|
|
if (!next) {
|
|
|
|
ftrace_match_records(func, len, enable);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2009-02-14 13:40:25 +08:00
|
|
|
/* command found */
|
2009-02-14 06:08:48 +08:00
|
|
|
|
|
|
|
command = strsep(&next, ":");
|
|
|
|
|
2009-02-14 13:40:25 +08:00
|
|
|
mutex_lock(&ftrace_cmd_mutex);
|
|
|
|
list_for_each_entry(p, &ftrace_commands, list) {
|
|
|
|
if (strcmp(p->name, command) == 0) {
|
|
|
|
ret = p->func(func, command, next, enable);
|
|
|
|
goto out_unlock;
|
|
|
|
}
|
2009-02-14 06:08:48 +08:00
|
|
|
}
|
2009-02-14 13:40:25 +08:00
|
|
|
out_unlock:
|
|
|
|
mutex_unlock(&ftrace_cmd_mutex);
|
2009-02-14 06:08:48 +08:00
|
|
|
|
2009-02-14 13:40:25 +08:00
|
|
|
return ret;
|
2009-02-14 06:08:48 +08:00
|
|
|
}
|
|
|
|
|
2008-05-13 03:20:51 +08:00
|
|
|
static ssize_t
|
2008-05-22 23:46:33 +08:00
|
|
|
ftrace_regex_write(struct file *file, const char __user *ubuf,
|
|
|
|
size_t cnt, loff_t *ppos, int enable)
|
2008-05-13 03:20:43 +08:00
|
|
|
{
|
|
|
|
struct ftrace_iterator *iter;
|
|
|
|
char ch;
|
|
|
|
size_t read = 0;
|
|
|
|
ssize_t ret;
|
|
|
|
|
|
|
|
if (!cnt || cnt < 0)
|
|
|
|
return 0;
|
|
|
|
|
2008-05-22 23:46:33 +08:00
|
|
|
mutex_lock(&ftrace_regex_lock);
|
2008-05-13 03:20:43 +08:00
|
|
|
|
|
|
|
if (file->f_mode & FMODE_READ) {
|
|
|
|
struct seq_file *m = file->private_data;
|
|
|
|
iter = m->private;
|
|
|
|
} else
|
|
|
|
iter = file->private_data;
|
|
|
|
|
|
|
|
if (!*ppos) {
|
|
|
|
iter->flags &= ~FTRACE_ITER_CONT;
|
|
|
|
iter->buffer_idx = 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
ret = get_user(ch, ubuf++);
|
|
|
|
if (ret)
|
|
|
|
goto out;
|
|
|
|
read++;
|
|
|
|
cnt--;
|
|
|
|
|
|
|
|
if (!(iter->flags & ~FTRACE_ITER_CONT)) {
|
|
|
|
/* skip white space */
|
|
|
|
while (cnt && isspace(ch)) {
|
|
|
|
ret = get_user(ch, ubuf++);
|
|
|
|
if (ret)
|
|
|
|
goto out;
|
|
|
|
read++;
|
|
|
|
cnt--;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (isspace(ch)) {
|
|
|
|
file->f_pos += read;
|
|
|
|
ret = read;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
|
|
|
iter->buffer_idx = 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
while (cnt && !isspace(ch)) {
|
|
|
|
if (iter->buffer_idx < FTRACE_BUFF_MAX)
|
|
|
|
iter->buffer[iter->buffer_idx++] = ch;
|
|
|
|
else {
|
|
|
|
ret = -EINVAL;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
ret = get_user(ch, ubuf++);
|
|
|
|
if (ret)
|
|
|
|
goto out;
|
|
|
|
read++;
|
|
|
|
cnt--;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (isspace(ch)) {
|
|
|
|
iter->filtered++;
|
|
|
|
iter->buffer[iter->buffer_idx] = 0;
|
2009-02-14 06:08:48 +08:00
|
|
|
ret = ftrace_process_regex(iter->buffer,
|
|
|
|
iter->buffer_idx, enable);
|
|
|
|
if (ret)
|
|
|
|
goto out;
|
2008-05-13 03:20:43 +08:00
|
|
|
iter->buffer_idx = 0;
|
|
|
|
} else
|
|
|
|
iter->flags |= FTRACE_ITER_CONT;
|
|
|
|
|
|
|
|
|
|
|
|
file->f_pos += read;
|
|
|
|
|
|
|
|
ret = read;
|
|
|
|
out:
|
2008-05-22 23:46:33 +08:00
|
|
|
mutex_unlock(&ftrace_regex_lock);
|
2008-05-13 03:20:43 +08:00
|
|
|
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2008-05-22 23:46:33 +08:00
|
|
|
static ssize_t
|
|
|
|
ftrace_filter_write(struct file *file, const char __user *ubuf,
|
|
|
|
size_t cnt, loff_t *ppos)
|
|
|
|
{
|
|
|
|
return ftrace_regex_write(file, ubuf, cnt, ppos, 1);
|
|
|
|
}
|
|
|
|
|
|
|
|
static ssize_t
|
|
|
|
ftrace_notrace_write(struct file *file, const char __user *ubuf,
|
|
|
|
size_t cnt, loff_t *ppos)
|
|
|
|
{
|
|
|
|
return ftrace_regex_write(file, ubuf, cnt, ppos, 0);
|
|
|
|
}
|
|
|
|
|
|
|
|
static void
|
|
|
|
ftrace_set_regex(unsigned char *buf, int len, int reset, int enable)
|
|
|
|
{
|
|
|
|
if (unlikely(ftrace_disabled))
|
|
|
|
return;
|
|
|
|
|
|
|
|
mutex_lock(&ftrace_regex_lock);
|
|
|
|
if (reset)
|
|
|
|
ftrace_filter_reset(enable);
|
|
|
|
if (buf)
|
2009-02-14 03:37:33 +08:00
|
|
|
ftrace_match_records(buf, len, enable);
|
2008-05-22 23:46:33 +08:00
|
|
|
mutex_unlock(&ftrace_regex_lock);
|
|
|
|
}
|
|
|
|
|
2008-05-13 03:20:45 +08:00
|
|
|
/**
|
|
|
|
* ftrace_set_filter - set a function to filter on in ftrace
|
|
|
|
* @buf - the string that holds the function filter text.
|
|
|
|
* @len - the length of the string.
|
|
|
|
* @reset - non-zero to reset all filters before applying this filter.
|
|
|
|
*
|
|
|
|
* Filters denote which functions should be enabled when tracing is enabled.
|
|
|
|
* If @buf is NULL and reset is set, all functions will be enabled for tracing.
|
|
|
|
*/
|
2008-05-13 03:20:51 +08:00
|
|
|
void ftrace_set_filter(unsigned char *buf, int len, int reset)
|
2008-05-13 03:20:45 +08:00
|
|
|
{
|
2008-05-22 23:46:33 +08:00
|
|
|
ftrace_set_regex(buf, len, reset, 1);
|
|
|
|
}
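/*
 * Hedged usage sketch (the glob and function name are illustrative, not
 * part of this file): reset any existing filter, then trace only the
 * hrtimer functions.
 */
static unsigned char hrtimer_glob[] = "hrtimer_*";

static void example_limit_tracing(void)
{
	ftrace_set_filter(hrtimer_glob, sizeof(hrtimer_glob) - 1, 1);
}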
|
2008-05-13 03:20:48 +08:00
|
|
|
|
2008-05-22 23:46:33 +08:00
|
|
|
/**
|
|
|
|
* ftrace_set_notrace - set a function to not trace in ftrace
|
|
|
|
* @buf - the string that holds the function notrace text.
|
|
|
|
* @len - the length of the string.
|
|
|
|
* @reset - non-zero to reset all filters before applying this filter.
|
|
|
|
*
|
|
|
|
* Notrace Filters denote which functions should not be enabled when tracing
|
|
|
|
* is enabled. If @buf is NULL and reset is set, all functions will be enabled
|
|
|
|
* for tracing.
|
|
|
|
*/
|
|
|
|
void ftrace_set_notrace(unsigned char *buf, int len, int reset)
|
|
|
|
{
|
|
|
|
ftrace_set_regex(buf, len, reset, 0);
|
2008-05-13 03:20:45 +08:00
|
|
|
}
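/*
 * Hedged usage sketch (the glob and function name are illustrative, not
 * part of this file): keep tracing everything, but never trace the mutex
 * helpers; the 0 reset leaves any existing notrace entries in place.
 */
static unsigned char mutex_glob[] = "mutex_*";

static void example_skip_mutexes(void)
{
	ftrace_set_notrace(mutex_glob, sizeof(mutex_glob) - 1, 0);
}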
|
|
|
|
|
2008-05-13 03:20:51 +08:00
|
|
|
static int
|
2008-05-22 23:46:33 +08:00
|
|
|
ftrace_regex_release(struct inode *inode, struct file *file, int enable)
|
2008-05-13 03:20:43 +08:00
|
|
|
{
|
|
|
|
struct seq_file *m = (struct seq_file *)file->private_data;
|
|
|
|
struct ftrace_iterator *iter;
|
|
|
|
|
2008-05-22 23:46:33 +08:00
|
|
|
mutex_lock(&ftrace_regex_lock);
|
2008-05-13 03:20:43 +08:00
|
|
|
if (file->f_mode & FMODE_READ) {
|
|
|
|
iter = m->private;
|
|
|
|
|
|
|
|
seq_release(inode, file);
|
|
|
|
} else
|
|
|
|
iter = file->private_data;
|
|
|
|
|
|
|
|
if (iter->buffer_idx) {
|
|
|
|
iter->filtered++;
|
|
|
|
iter->buffer[iter->buffer_idx] = 0;
|
2009-02-14 03:37:33 +08:00
|
|
|
ftrace_match_records(iter->buffer, iter->buffer_idx, enable);
|
2008-05-13 03:20:43 +08:00
|
|
|
}
|
|
|
|
|
2009-02-14 14:42:44 +08:00
|
|
|
mutex_lock(&ftrace_lock);
|
2008-11-16 05:31:41 +08:00
|
|
|
if (ftrace_start_up && ftrace_enabled)
|
2008-05-13 03:20:43 +08:00
|
|
|
ftrace_run_update_code(FTRACE_ENABLE_CALLS);
|
2009-02-14 14:42:44 +08:00
|
|
|
mutex_unlock(&ftrace_lock);
|
2008-05-13 03:20:43 +08:00
|
|
|
|
|
|
|
kfree(iter);
|
2008-05-22 23:46:33 +08:00
|
|
|
mutex_unlock(&ftrace_regex_lock);
|
2008-05-13 03:20:43 +08:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2008-05-22 23:46:33 +08:00
|
|
|
static int
|
|
|
|
ftrace_filter_release(struct inode *inode, struct file *file)
|
|
|
|
{
|
|
|
|
return ftrace_regex_release(inode, file, 1);
|
|
|
|
}
|
|
|
|
|
|
|
|
static int
|
|
|
|
ftrace_notrace_release(struct inode *inode, struct file *file)
|
|
|
|
{
|
|
|
|
return ftrace_regex_release(inode, file, 0);
|
|
|
|
}
|
|
|
|
|
2009-03-06 10:44:55 +08:00
|
|
|
static const struct file_operations ftrace_avail_fops = {
|
2008-05-13 03:20:43 +08:00
|
|
|
.open = ftrace_avail_open,
|
|
|
|
.read = seq_read,
|
|
|
|
.llseek = seq_lseek,
|
|
|
|
.release = ftrace_avail_release,
|
|
|
|
};
|
|
|
|
|
2009-03-06 10:44:55 +08:00
|
|
|
static const struct file_operations ftrace_failures_fops = {
|
2008-06-02 00:17:54 +08:00
|
|
|
.open = ftrace_failures_open,
|
|
|
|
.read = seq_read,
|
|
|
|
.llseek = seq_lseek,
|
|
|
|
.release = ftrace_avail_release,
|
|
|
|
};
|
|
|
|
|
2009-03-06 10:44:55 +08:00
|
|
|
static const struct file_operations ftrace_filter_fops = {
|
2008-05-13 03:20:43 +08:00
|
|
|
.open = ftrace_filter_open,
|
2009-03-13 17:47:23 +08:00
|
|
|
.read = seq_read,
|
2008-05-13 03:20:43 +08:00
|
|
|
.write = ftrace_filter_write,
|
2008-05-22 23:46:33 +08:00
|
|
|
.llseek = ftrace_regex_lseek,
|
2008-05-13 03:20:43 +08:00
|
|
|
.release = ftrace_filter_release,
|
|
|
|
};
|
|
|
|
|
2009-03-06 10:44:55 +08:00
|
|
|
static const struct file_operations ftrace_notrace_fops = {
|
2008-05-22 23:46:33 +08:00
|
|
|
.open = ftrace_notrace_open,
|
2009-03-13 17:47:23 +08:00
|
|
|
.read = seq_read,
|
2008-05-22 23:46:33 +08:00
|
|
|
.write = ftrace_notrace_write,
|
|
|
|
.llseek = ftrace_regex_lseek,
|
|
|
|
.release = ftrace_notrace_release,
|
|
|
|
};
|
|
|
|
|
2008-12-04 04:36:57 +08:00
|
|
|
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
|
|
|
|
|
|
|
|
static DEFINE_MUTEX(graph_lock);
|
|
|
|
|
|
|
|
int ftrace_graph_count;
|
|
|
|
unsigned long ftrace_graph_funcs[FTRACE_GRAPH_MAX_FUNCS] __read_mostly;
|
|
|
|
|
|
|
|
static void *
|
|
|
|
g_next(struct seq_file *m, void *v, loff_t *pos)
|
|
|
|
{
|
|
|
|
unsigned long *array = m->private;
|
|
|
|
int index = *pos;
|
|
|
|
|
|
|
|
(*pos)++;
|
|
|
|
|
|
|
|
if (index >= ftrace_graph_count)
|
|
|
|
return NULL;
|
|
|
|
|
|
|
|
return &array[index];
|
|
|
|
}
|
|
|
|
|
|
|
|
static void *g_start(struct seq_file *m, loff_t *pos)
|
|
|
|
{
|
|
|
|
void *p = NULL;
|
|
|
|
|
|
|
|
mutex_lock(&graph_lock);
|
|
|
|
|
2009-02-20 04:13:12 +08:00
|
|
|
/* Nothing, tell g_show to print all functions are enabled */
|
|
|
|
if (!ftrace_graph_count && !*pos)
|
|
|
|
return (void *)1;
|
|
|
|
|
2008-12-04 04:36:57 +08:00
|
|
|
p = g_next(m, p, pos);
|
|
|
|
|
|
|
|
return p;
|
|
|
|
}
|
|
|
|
|
|
|
|
static void g_stop(struct seq_file *m, void *p)
|
|
|
|
{
|
|
|
|
mutex_unlock(&graph_lock);
|
|
|
|
}
|
|
|
|
|
|
|
|
static int g_show(struct seq_file *m, void *v)
|
|
|
|
{
|
|
|
|
unsigned long *ptr = v;
|
|
|
|
char str[KSYM_SYMBOL_LEN];
|
|
|
|
|
|
|
|
if (!ptr)
|
|
|
|
return 0;
|
|
|
|
|
2009-02-20 04:13:12 +08:00
|
|
|
if (ptr == (unsigned long *)1) {
|
|
|
|
seq_printf(m, "#### all functions enabled ####\n");
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2008-12-04 04:36:57 +08:00
|
|
|
kallsyms_lookup(*ptr, NULL, NULL, NULL, str);
|
|
|
|
|
|
|
|
seq_printf(m, "%s\n", str);
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static struct seq_operations ftrace_graph_seq_ops = {
|
|
|
|
.start = g_start,
|
|
|
|
.next = g_next,
|
|
|
|
.stop = g_stop,
|
|
|
|
.show = g_show,
|
|
|
|
};
|
|
|
|
|
|
|
|
static int
|
|
|
|
ftrace_graph_open(struct inode *inode, struct file *file)
|
|
|
|
{
|
|
|
|
int ret = 0;
|
|
|
|
|
|
|
|
if (unlikely(ftrace_disabled))
|
|
|
|
return -ENODEV;
|
|
|
|
|
|
|
|
mutex_lock(&graph_lock);
|
|
|
|
if ((file->f_mode & FMODE_WRITE) &&
|
|
|
|
!(file->f_flags & O_APPEND)) {
|
|
|
|
ftrace_graph_count = 0;
|
|
|
|
memset(ftrace_graph_funcs, 0, sizeof(ftrace_graph_funcs));
|
|
|
|
}
|
|
|
|
|
|
|
|
if (file->f_mode & FMODE_READ) {
|
|
|
|
ret = seq_open(file, &ftrace_graph_seq_ops);
|
|
|
|
if (!ret) {
|
|
|
|
struct seq_file *m = file->private_data;
|
|
|
|
m->private = ftrace_graph_funcs;
|
|
|
|
}
|
|
|
|
} else
|
|
|
|
file->private_data = ftrace_graph_funcs;
|
|
|
|
mutex_unlock(&graph_lock);
|
|
|
|
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int
|
2009-02-20 04:13:12 +08:00
|
|
|
ftrace_set_func(unsigned long *array, int *idx, char *buffer)
|
2008-12-04 04:36:57 +08:00
|
|
|
{
|
|
|
|
struct dyn_ftrace *rec;
|
|
|
|
struct ftrace_page *pg;
|
2009-02-20 04:13:12 +08:00
|
|
|
int search_len;
|
2008-12-04 04:36:57 +08:00
|
|
|
int found = 0;
|
2009-02-20 04:13:12 +08:00
|
|
|
int type, not;
|
|
|
|
char *search;
|
|
|
|
bool exists;
|
|
|
|
int i;
|
2008-12-04 04:36:57 +08:00
|
|
|
|
|
|
|
if (ftrace_disabled)
|
|
|
|
return -ENODEV;
|
|
|
|
|
2009-02-20 04:13:12 +08:00
|
|
|
/* decode regex */
|
|
|
|
type = ftrace_setup_glob(buffer, strlen(buffer), &search, ¬);
|
|
|
|
if (not)
|
|
|
|
return -EINVAL;
|
|
|
|
|
|
|
|
search_len = strlen(search);
|
|
|
|
|
2009-02-14 14:15:39 +08:00
|
|
|
mutex_lock(&ftrace_lock);
|
2009-02-14 01:43:56 +08:00
|
|
|
do_for_each_ftrace_rec(pg, rec) {
|
|
|
|
|
2009-02-20 04:13:12 +08:00
|
|
|
if (*idx >= FTRACE_GRAPH_MAX_FUNCS)
|
|
|
|
break;
|
|
|
|
|
2009-02-14 01:43:56 +08:00
|
|
|
if (rec->flags & (FTRACE_FL_FAILED | FTRACE_FL_FREE))
|
|
|
|
continue;
|
|
|
|
|
2009-02-20 04:13:12 +08:00
|
|
|
if (ftrace_match_record(rec, search, search_len, type)) {
|
|
|
|
/* ensure it is not already in the array */
|
|
|
|
exists = false;
|
|
|
|
for (i = 0; i < *idx; i++)
|
|
|
|
if (array[i] == rec->ip) {
|
|
|
|
exists = true;
|
2009-02-14 01:43:56 +08:00
|
|
|
break;
|
|
|
|
}
|
2009-02-20 04:13:12 +08:00
|
|
|
if (!exists) {
|
|
|
|
array[(*idx)++] = rec->ip;
|
|
|
|
found = 1;
|
|
|
|
}
|
2008-12-04 04:36:57 +08:00
|
|
|
}
|
2009-02-14 01:43:56 +08:00
|
|
|
} while_for_each_ftrace_rec();
|
2009-02-20 04:13:12 +08:00
|
|
|
|
2009-02-14 14:15:39 +08:00
|
|
|
mutex_unlock(&ftrace_lock);
|
2008-12-04 04:36:57 +08:00
|
|
|
|
|
|
|
return found ? 0 : -EINVAL;
|
|
|
|
}
|
|
|
|
|
|
|
|
static ssize_t
|
|
|
|
ftrace_graph_write(struct file *file, const char __user *ubuf,
|
|
|
|
size_t cnt, loff_t *ppos)
|
|
|
|
{
|
|
|
|
unsigned char buffer[FTRACE_BUFF_MAX+1];
|
|
|
|
unsigned long *array;
|
|
|
|
size_t read = 0;
|
|
|
|
ssize_t ret;
|
|
|
|
int index = 0;
|
|
|
|
char ch;
|
|
|
|
|
|
|
|
if (!cnt || cnt < 0)
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
mutex_lock(&graph_lock);
|
|
|
|
|
|
|
|
if (ftrace_graph_count >= FTRACE_GRAPH_MAX_FUNCS) {
|
|
|
|
ret = -EBUSY;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (file->f_mode & FMODE_READ) {
|
|
|
|
struct seq_file *m = file->private_data;
|
|
|
|
array = m->private;
|
|
|
|
} else
|
|
|
|
array = file->private_data;
|
|
|
|
|
|
|
|
ret = get_user(ch, ubuf++);
|
|
|
|
if (ret)
|
|
|
|
goto out;
|
|
|
|
read++;
|
|
|
|
cnt--;
|
|
|
|
|
|
|
|
/* skip white space */
|
|
|
|
while (cnt && isspace(ch)) {
|
|
|
|
ret = get_user(ch, ubuf++);
|
|
|
|
if (ret)
|
|
|
|
goto out;
|
|
|
|
read++;
|
|
|
|
cnt--;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (isspace(ch)) {
|
|
|
|
*ppos += read;
|
|
|
|
ret = read;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
|
|
|
while (cnt && !isspace(ch)) {
|
|
|
|
if (index < FTRACE_BUFF_MAX)
|
|
|
|
buffer[index++] = ch;
|
|
|
|
else {
|
|
|
|
ret = -EINVAL;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
ret = get_user(ch, ubuf++);
|
|
|
|
if (ret)
|
|
|
|
goto out;
|
|
|
|
read++;
|
|
|
|
cnt--;
|
|
|
|
}
|
|
|
|
buffer[index] = 0;
|
|
|
|
|
2009-02-20 04:13:12 +08:00
|
|
|
/* we allow only one expression at a time */
|
|
|
|
ret = ftrace_set_func(array, &ftrace_graph_count, buffer);
|
2008-12-04 04:36:57 +08:00
|
|
|
if (ret)
|
|
|
|
goto out;
|
|
|
|
|
|
|
|
file->f_pos += read;
|
|
|
|
|
|
|
|
ret = read;
|
|
|
|
out:
|
|
|
|
mutex_unlock(&graph_lock);
|
|
|
|
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
static const struct file_operations ftrace_graph_fops = {
|
|
|
|
.open = ftrace_graph_open,
|
2009-03-13 17:47:23 +08:00
|
|
|
.read = seq_read,
|
2008-12-04 04:36:57 +08:00
|
|
|
.write = ftrace_graph_write,
|
|
|
|
};
|
|
|
|
#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
|
|
|
|
|
2008-11-26 13:16:23 +08:00
|
|
|
static __init int ftrace_init_dyn_debugfs(struct dentry *d_tracer)
|
2008-05-13 03:20:43 +08:00
|
|
|
{
|
|
|
|
struct dentry *entry;
|
|
|
|
|
|
|
|
entry = debugfs_create_file("available_filter_functions", 0444,
|
|
|
|
d_tracer, NULL, &ftrace_avail_fops);
|
|
|
|
if (!entry)
|
|
|
|
pr_warning("Could not create debugfs "
|
|
|
|
"'available_filter_functions' entry\n");
|
|
|
|
|
2008-06-02 00:17:54 +08:00
|
|
|
entry = debugfs_create_file("failures", 0444,
|
|
|
|
d_tracer, NULL, &ftrace_failures_fops);
|
|
|
|
if (!entry)
|
|
|
|
pr_warning("Could not create debugfs 'failures' entry\n");
|
|
|
|
|
2008-05-13 03:20:43 +08:00
|
|
|
entry = debugfs_create_file("set_ftrace_filter", 0644, d_tracer,
|
|
|
|
NULL, &ftrace_filter_fops);
|
|
|
|
if (!entry)
|
|
|
|
pr_warning("Could not create debugfs "
|
|
|
|
"'set_ftrace_filter' entry\n");
|
2008-05-22 23:46:33 +08:00
|
|
|
|
|
|
|
entry = debugfs_create_file("set_ftrace_notrace", 0644, d_tracer,
|
|
|
|
NULL, &ftrace_notrace_fops);
|
|
|
|
if (!entry)
|
|
|
|
pr_warning("Could not create debugfs "
|
|
|
|
"'set_ftrace_notrace' entry\n");
|
ftrace: user update and disable dynamic ftrace daemon
In dynamic ftrace, the mcount function starts off pointing to a stub
function that just returns.
On startup, the call to the stub is modified to point to a "record_ip"
function. The job of the record_ip function is to add the function to
a pre-allocated hash list. If the function is already there, it is simply
ignored; otherwise it is added to the list.
Later, the ftraced daemon wakes up and calls kstop_machine if any functions
have been recorded, and changes the calls to the recorded functions to
a simple nop. If no functions were recorded, the daemon goes back to sleep.
The daemon wakes up once a second to see if it needs to update any newly
recorded functions into nops. Usually it does not, but if a lot of code
has been executed for the first time in the kernel, the ftraced daemon
will call kstop_machine to update those into nops.
The problem currently is that there's no way to stop the daemon from doing
this, and it can cause unneeded latencies (800us, which for some is bothersome).
This patch adds a new file, /debugfs/tracing/ftraced_enabled. Reading this file
returns "enabled\n" while the daemon is running and "disabled\n" when it is not.
To disable the daemon, the user can echo "0" or "disable" into this file, and
"1" or "enable" to re-enable the daemon.
Since the daemon is used to convert the functions into nops to increase
the performance of the system, I also added that anytime something is
written into the ftraced_enabled file, kstop_machine will run if there
are new functions that have been detected that need to be converted.
This way the user can disable the daemon but still control the
conversion of the mcount calls to nops simply by running
"echo 0 > /debugfs/tracing/ftraced_enabled"
whenever they need to do more conversions.
To see the number of converted functions:
"cat /debugfs/tracing/dyn_ftrace_total_info"
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
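A minimal userspace sketch of the control described above, assuming
debugfs is mounted at /debugfs as in the commands shown (the program
and its error handling are illustrative):

#include <fcntl.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/debugfs/tracing/ftraced_enabled", O_WRONLY);

	if (fd < 0)
		return 1;
	/* "0" disables the daemon; any write also converts newly
	   recorded functions if some are pending */
	write(fd, "0", 1);
	close(fd);
	return 0;
}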
2008-05-28 08:48:37 +08:00
|
|
|
|
2008-12-04 04:36:57 +08:00
|
|
|
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
|
|
|
|
entry = debugfs_create_file("set_graph_function", 0444, d_tracer,
|
|
|
|
NULL,
|
|
|
|
&ftrace_graph_fops);
|
|
|
|
if (!entry)
|
|
|
|
pr_warning("Could not create debugfs "
|
|
|
|
"'set_graph_function' entry\n");
|
|
|
|
#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
|
|
|
|
|
2008-05-13 03:20:43 +08:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2008-11-15 08:21:19 +08:00
|
|
|
static int ftrace_convert_nops(struct module *mod,
|
|
|
|
unsigned long *start,
|
2008-08-15 03:45:08 +08:00
|
|
|
unsigned long *end)
|
|
|
|
{
|
|
|
|
unsigned long *p;
|
|
|
|
unsigned long addr;
|
|
|
|
unsigned long flags;
|
|
|
|
|
2009-02-14 14:42:44 +08:00
|
|
|
mutex_lock(&ftrace_lock);
|
2008-08-15 03:45:08 +08:00
|
|
|
p = start;
|
|
|
|
while (p < end) {
|
|
|
|
addr = ftrace_call_adjust(*p++);
|
2008-11-15 08:21:19 +08:00
|
|
|
/*
|
|
|
|
* Some architecture linkers will pad between
|
|
|
|
* the different mcount_loc sections of different
|
|
|
|
* object files to satisfy alignments.
|
|
|
|
* Skip any NULL pointers.
|
|
|
|
*/
|
|
|
|
if (!addr)
|
|
|
|
continue;
|
2008-08-15 03:45:08 +08:00
|
|
|
ftrace_record_ip(addr);
|
|
|
|
}
|
|
|
|
|
2008-10-23 21:33:07 +08:00
|
|
|
/* disable interrupts to prevent kstop_machine */
|
2008-08-15 03:45:08 +08:00
|
|
|
local_irq_save(flags);
|
2008-11-15 08:21:19 +08:00
|
|
|
ftrace_update_code(mod);
|
2008-08-15 03:45:08 +08:00
|
|
|
local_irq_restore(flags);
|
2009-02-14 14:42:44 +08:00
|
|
|
mutex_unlock(&ftrace_lock);
|
2008-08-15 03:45:08 +08:00
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2008-11-15 08:21:19 +08:00
|
|
|
void ftrace_init_module(struct module *mod,
|
|
|
|
unsigned long *start, unsigned long *end)
|
2008-08-15 03:45:09 +08:00
|
|
|
{
|
2008-08-16 09:40:04 +08:00
|
|
|
if (ftrace_disabled || start == end)
|
2008-08-15 10:47:19 +08:00
|
|
|
return;
|
2008-11-15 08:21:19 +08:00
|
|
|
ftrace_convert_nops(mod, start, end);
|
2008-08-15 03:45:09 +08:00
|
|
|
}
|
|
|
|
|
2008-08-15 03:45:08 +08:00
|
|
|
extern unsigned long __start_mcount_loc[];
|
|
|
|
extern unsigned long __stop_mcount_loc[];
|
|
|
|
|
|
|
|
void __init ftrace_init(void)
|
|
|
|
{
|
|
|
|
unsigned long count, addr, flags;
|
|
|
|
int ret;
|
|
|
|
|
|
|
|
/* Keep the ftrace pointer to the stub */
|
|
|
|
addr = (unsigned long)ftrace_stub;
|
|
|
|
|
|
|
|
local_irq_save(flags);
|
|
|
|
ftrace_dyn_arch_init(&addr);
|
|
|
|
local_irq_restore(flags);
|
|
|
|
|
|
|
|
/* ftrace_dyn_arch_init places the return code in addr */
|
|
|
|
if (addr)
|
|
|
|
goto failed;
|
|
|
|
|
|
|
|
count = __stop_mcount_loc - __start_mcount_loc;
|
|
|
|
|
|
|
|
ret = ftrace_dyn_table_alloc(count);
|
|
|
|
if (ret)
|
|
|
|
goto failed;
|
|
|
|
|
|
|
|
last_ftrace_enabled = ftrace_enabled = 1;
|
|
|
|
|
2008-11-15 08:21:19 +08:00
|
|
|
ret = ftrace_convert_nops(NULL,
|
|
|
|
__start_mcount_loc,
|
2008-08-15 03:45:08 +08:00
|
|
|
__stop_mcount_loc);
|
|
|
|
|
|
|
|
return;
|
|
|
|
failed:
|
|
|
|
ftrace_disabled = 1;
|
|
|
|
}
|
|
|
|
|
ftrace: dynamic enabling/disabling of function calls
2008-05-13 03:20:42 +08:00
|
|
|
#else
|
2008-10-29 03:17:38 +08:00
|
|
|
|
|
|
|
static int __init ftrace_nodyn_init(void)
|
|
|
|
{
|
|
|
|
ftrace_enabled = 1;
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
device_initcall(ftrace_nodyn_init);
|
|
|
|
|
2008-11-26 13:16:23 +08:00
|
|
|
static inline int ftrace_init_dyn_debugfs(struct dentry *d_tracer) { return 0; }
|
|
|
|
static inline void ftrace_startup_enable(int command) { }
|
2008-11-26 13:16:24 +08:00
|
|
|
/* Keep as macros so we do not need to define the commands */
|
|
|
|
# define ftrace_startup(command) do { } while (0)
|
|
|
|
# define ftrace_shutdown(command) do { } while (0)
|
2008-05-13 03:20:45 +08:00
|
|
|
# define ftrace_startup_sysctl() do { } while (0)
|
|
|
|
# define ftrace_shutdown_sysctl() do { } while (0)
|
ftrace: dynamic enabling/disabling of function calls
2008-05-13 03:20:42 +08:00
|
|
|
#endif /* CONFIG_DYNAMIC_FTRACE */
|
|
|
|
|
2008-11-26 13:16:23 +08:00
|
|
|
static ssize_t
|
|
|
|
ftrace_pid_read(struct file *file, char __user *ubuf,
|
|
|
|
size_t cnt, loff_t *ppos)
|
|
|
|
{
|
|
|
|
char buf[64];
|
|
|
|
int r;
|
|
|
|
|
2008-12-04 13:26:41 +08:00
|
|
|
if (ftrace_pid_trace == ftrace_swapper_pid)
|
|
|
|
r = sprintf(buf, "swapper tasks\n");
|
|
|
|
else if (ftrace_pid_trace)
|
2009-03-24 11:03:01 +08:00
|
|
|
r = sprintf(buf, "%u\n", pid_vnr(ftrace_pid_trace));
|
2008-11-26 13:16:23 +08:00
|
|
|
else
|
|
|
|
r = sprintf(buf, "no pid\n");
|
|
|
|
|
|
|
|
return simple_read_from_buffer(ubuf, cnt, ppos, buf, r);
|
|
|
|
}
|
|
|
|
|
2008-12-04 13:26:41 +08:00
|
|
|
static void clear_ftrace_swapper(void)
|
2008-12-04 13:26:40 +08:00
|
|
|
{
|
|
|
|
struct task_struct *p;
|
2008-12-04 13:26:41 +08:00
|
|
|
int cpu;
|
2008-12-04 13:26:40 +08:00
|
|
|
|
2008-12-04 13:26:41 +08:00
|
|
|
get_online_cpus();
|
|
|
|
for_each_online_cpu(cpu) {
|
|
|
|
p = idle_task(cpu);
|
2008-12-04 13:26:40 +08:00
|
|
|
clear_tsk_trace_trace(p);
|
2008-12-04 13:26:41 +08:00
|
|
|
}
|
|
|
|
put_online_cpus();
|
|
|
|
}
|
2008-12-04 13:26:40 +08:00
|
|
|
|
2008-12-04 13:26:41 +08:00
|
|
|
static void set_ftrace_swapper(void)
|
|
|
|
{
|
|
|
|
struct task_struct *p;
|
|
|
|
int cpu;
|
|
|
|
|
|
|
|
get_online_cpus();
|
|
|
|
for_each_online_cpu(cpu) {
|
|
|
|
p = idle_task(cpu);
|
|
|
|
set_tsk_trace_trace(p);
|
|
|
|
}
|
|
|
|
put_online_cpus();
|
2008-12-04 13:26:40 +08:00
|
|
|
}
|
|
|
|
|
2008-12-04 13:26:41 +08:00
|
|
|
static void clear_ftrace_pid(struct pid *pid)
|
|
|
|
{
|
|
|
|
struct task_struct *p;
|
|
|
|
|
2009-02-04 03:39:04 +08:00
|
|
|
rcu_read_lock();
|
2008-12-04 13:26:41 +08:00
|
|
|
do_each_pid_task(pid, PIDTYPE_PID, p) {
|
|
|
|
clear_tsk_trace_trace(p);
|
|
|
|
} while_each_pid_task(pid, PIDTYPE_PID, p);
|
2009-02-04 03:39:04 +08:00
|
|
|
rcu_read_unlock();
|
|
|
|
|
2008-12-04 13:26:41 +08:00
|
|
|
put_pid(pid);
|
|
|
|
}
|
|
|
|
|
|
|
|
static void set_ftrace_pid(struct pid *pid)
|
2008-12-04 13:26:40 +08:00
|
|
|
{
|
|
|
|
struct task_struct *p;
|
|
|
|
|
2009-02-04 03:39:04 +08:00
|
|
|
rcu_read_lock();
|
2008-12-04 13:26:40 +08:00
|
|
|
do_each_pid_task(pid, PIDTYPE_PID, p) {
|
|
|
|
set_tsk_trace_trace(p);
|
|
|
|
} while_each_pid_task(pid, PIDTYPE_PID, p);
|
2009-02-04 03:39:04 +08:00
|
|
|
rcu_read_unlock();
|
2008-12-04 13:26:40 +08:00
|
|
|
}
|
|
|
|
|
2008-12-04 13:26:41 +08:00
|
|
|
static void clear_ftrace_pid_task(struct pid **pid)
|
|
|
|
{
|
|
|
|
if (*pid == ftrace_swapper_pid)
|
|
|
|
clear_ftrace_swapper();
|
|
|
|
else
|
|
|
|
clear_ftrace_pid(*pid);
|
|
|
|
|
|
|
|
*pid = NULL;
|
|
|
|
}
|
|
|
|
|
|
|
|
static void set_ftrace_pid_task(struct pid *pid)
|
|
|
|
{
|
|
|
|
if (pid == ftrace_swapper_pid)
|
|
|
|
set_ftrace_swapper();
|
|
|
|
else
|
|
|
|
set_ftrace_pid(pid);
|
|
|
|
}
|
|
|
|
|
2008-11-26 13:16:23 +08:00
|
|
|
static ssize_t
|
|
|
|
ftrace_pid_write(struct file *filp, const char __user *ubuf,
|
|
|
|
size_t cnt, loff_t *ppos)
|
|
|
|
{
|
2008-12-04 13:26:40 +08:00
|
|
|
struct pid *pid;
|
2008-11-26 13:16:23 +08:00
|
|
|
char buf[64];
|
|
|
|
long val;
|
|
|
|
int ret;
|
|
|
|
|
|
|
|
if (cnt >= sizeof(buf))
|
|
|
|
return -EINVAL;
|
|
|
|
|
|
|
|
if (copy_from_user(&buf, ubuf, cnt))
|
|
|
|
return -EFAULT;
|
|
|
|
|
|
|
|
buf[cnt] = 0;
|
|
|
|
|
|
|
|
ret = strict_strtol(buf, 10, &val);
|
|
|
|
if (ret < 0)
|
|
|
|
return ret;
|
|
|
|
|
2009-02-14 14:42:44 +08:00
|
|
|
mutex_lock(&ftrace_lock);
|
2008-12-04 13:26:40 +08:00
|
|
|
if (val < 0) {
|
2008-11-26 13:16:23 +08:00
|
|
|
/* disable pid tracing */
|
2008-12-04 13:26:40 +08:00
|
|
|
if (!ftrace_pid_trace)
|
2008-11-26 13:16:23 +08:00
|
|
|
goto out;
|
2008-12-04 13:26:40 +08:00
|
|
|
|
|
|
|
clear_ftrace_pid_task(&ftrace_pid_trace);
|
2008-11-26 13:16:23 +08:00
|
|
|
|
|
|
|
} else {
|
2008-12-04 13:26:41 +08:00
|
|
|
/* swapper task is special */
|
|
|
|
if (!val) {
|
|
|
|
pid = ftrace_swapper_pid;
|
|
|
|
if (pid == ftrace_pid_trace)
|
|
|
|
goto out;
|
|
|
|
} else {
|
|
|
|
pid = find_get_pid(val);
|
2008-11-26 13:16:23 +08:00
|
|
|
|
2008-12-04 13:26:41 +08:00
|
|
|
if (pid == ftrace_pid_trace) {
|
|
|
|
put_pid(pid);
|
|
|
|
goto out;
|
|
|
|
}
|
2008-12-04 04:36:58 +08:00
|
|
|
}
|
|
|
|
|
2008-12-04 13:26:40 +08:00
|
|
|
if (ftrace_pid_trace)
|
|
|
|
clear_ftrace_pid_task(&ftrace_pid_trace);
|
|
|
|
|
|
|
|
if (!pid)
|
|
|
|
goto out;
|
|
|
|
|
|
|
|
ftrace_pid_trace = pid;
|
|
|
|
|
|
|
|
set_ftrace_pid_task(ftrace_pid_trace);
|
2008-11-26 13:16:23 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
/* update the function call */
|
|
|
|
ftrace_update_pid_func();
|
|
|
|
ftrace_startup_enable(0);
|
|
|
|
|
|
|
|
out:
|
2009-02-14 14:42:44 +08:00
|
|
|
mutex_unlock(&ftrace_lock);
|
2008-11-26 13:16:23 +08:00
|
|
|
|
|
|
|
return cnt;
|
|
|
|
}
|
|
|
|
|
2009-03-06 10:44:55 +08:00
|
|
|
static const struct file_operations ftrace_pid_fops = {
|
2008-11-26 13:16:23 +08:00
|
|
|
.read = ftrace_pid_read,
|
|
|
|
.write = ftrace_pid_write,
|
|
|
|
};
|
|
|
|
|
|
|
|
static __init int ftrace_init_debugfs(void)
|
|
|
|
{
|
|
|
|
struct dentry *d_tracer;
|
|
|
|
struct dentry *entry;
|
|
|
|
|
|
|
|
d_tracer = tracing_init_dentry();
|
|
|
|
if (!d_tracer)
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
ftrace_init_dyn_debugfs(d_tracer);
|
|
|
|
|
|
|
|
entry = debugfs_create_file("set_ftrace_pid", 0644, d_tracer,
|
|
|
|
NULL, &ftrace_pid_fops);
|
|
|
|
if (!entry)
|
|
|
|
pr_warning("Could not create debugfs "
|
|
|
|
"'set_ftrace_pid' entry\n");
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
fs_initcall(ftrace_init_debugfs);
|
|
|
|
|
2008-07-11 08:58:15 +08:00
|
|
|
/**
|
2008-10-23 21:33:02 +08:00
|
|
|
* ftrace_kill - kill ftrace
|
2008-07-11 08:58:15 +08:00
|
|
|
*
|
|
|
|
* This function should be used by panic code. It stops ftrace
|
|
|
|
* but in a not so nice way. If you need to simply kill ftrace
|
|
|
|
* from a non-atomic section, use ftrace_kill.
|
|
|
|
*/
|
2008-10-23 21:33:02 +08:00
|
|
|
void ftrace_kill(void)
|
2008-07-11 08:58:15 +08:00
|
|
|
{
|
|
|
|
ftrace_disabled = 1;
|
|
|
|
ftrace_enabled = 0;
|
|
|
|
clear_ftrace_function();
|
|
|
|
}
|
|
|
|
|
2008-05-13 03:20:42 +08:00
|
|
|
/**
|
ftrace: dynamic enabling/disabling of function calls
2008-05-13 03:20:42 +08:00
|
|
|
* register_ftrace_function - register a function for profiling
|
|
|
|
* @ops - ops structure that holds the function for profiling.
|
2008-05-13 03:20:42 +08:00
|
|
|
*
|
ftrace: dynamic enabling/disabling of function calls
2008-05-13 03:20:42 +08:00
|
|
|
* Register a function to be called by all functions in the
|
|
|
|
* kernel.
|
|
|
|
*
|
|
|
|
* Note: @ops->func and all the functions it calls must be labeled
|
|
|
|
* with "notrace", otherwise it will go into a
|
|
|
|
* recursive loop.
|
2008-05-13 03:20:42 +08:00
|
|
|
*/
|
ftrace: dynamic enabling/disabling of function calls
2008-05-13 03:20:42 +08:00
|
|
|
int register_ftrace_function(struct ftrace_ops *ops)
|
2008-05-13 03:20:42 +08:00
|
|
|
{
|
2008-05-13 03:20:43 +08:00
|
|
|
int ret;
|
|
|
|
|
2008-05-13 03:20:48 +08:00
|
|
|
if (unlikely(ftrace_disabled))
|
|
|
|
return -1;
|
|
|
|
|
2009-02-14 14:42:44 +08:00
|
|
|
mutex_lock(&ftrace_lock);
|
2008-11-16 13:02:06 +08:00
|
|
|
|
2008-05-13 03:20:43 +08:00
|
|
|
ret = __register_ftrace_function(ops);
|
2008-11-26 13:16:24 +08:00
|
|
|
ftrace_startup(0);
|
2008-05-13 03:20:43 +08:00
|
|
|
|
2009-02-14 14:42:44 +08:00
|
|
|
mutex_unlock(&ftrace_lock);
|
2008-05-13 03:20:43 +08:00
|
|
|
return ret;
|
ftrace: dynamic enabling/disabling of function calls
2008-05-13 03:20:42 +08:00
|
|
|
}
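/*
 * Minimal usage sketch (names are illustrative, not part of this file):
 * a module that hooks every traced function. The callback, and anything
 * it calls, must be notrace to avoid the recursive loop noted above.
 */
static void notrace example_trace_func(unsigned long ip, unsigned long parent_ip)
{
	/* ip is the traced function, parent_ip is its caller */
}

static struct ftrace_ops example_trace_ops = {
	.func	= example_trace_func,
};

/*
 * register_ftrace_function(&example_trace_ops) starts the callbacks;
 * unregister_ftrace_function(&example_trace_ops) stops them.
 */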
|
|
|
|
|
|
|
|
/**
|
2009-01-13 06:35:50 +08:00
|
|
|
* unregister_ftrace_function - unregister a function for profiling.
|
ftrace: dynamic enabling/disabling of function calls
2008-05-13 03:20:42 +08:00
|
|
|
* @ops - ops structure that holds the function to unregister
|
|
|
|
*
|
|
|
|
* Unregister a function that was added to be called by ftrace profiling.
|
|
|
|
*/
|
|
|
|
int unregister_ftrace_function(struct ftrace_ops *ops)
|
|
|
|
{
|
|
|
|
int ret;
|
|
|
|
|
2009-02-14 14:42:44 +08:00
|
|
|
mutex_lock(&ftrace_lock);
|
ftrace: dynamic enabling/disabling of function calls
2008-05-13 03:20:42 +08:00
|
|
|
ret = __unregister_ftrace_function(ops);
|
2008-11-26 13:16:24 +08:00
|
|
|
ftrace_shutdown(0);
|
2009-02-14 14:42:44 +08:00
|
|
|
mutex_unlock(&ftrace_lock);
|
2008-05-13 03:20:43 +08:00
|
|
|
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2008-05-13 03:20:51 +08:00
|
|
|
int
|
2008-05-13 03:20:43 +08:00
|
|
|
ftrace_enable_sysctl(struct ctl_table *table, int write,
|
2008-05-13 03:20:43 +08:00
|
|
|
struct file *file, void __user *buffer, size_t *lenp,
|
2008-05-13 03:20:43 +08:00
|
|
|
loff_t *ppos)
|
|
|
|
{
|
|
|
|
int ret;
|
|
|
|
|
2008-05-13 03:20:48 +08:00
|
|
|
if (unlikely(ftrace_disabled))
|
|
|
|
return -ENODEV;
|
|
|
|
|
2009-02-14 14:42:44 +08:00
|
|
|
mutex_lock(&ftrace_lock);
|
2008-05-13 03:20:43 +08:00
|
|
|
|
2008-05-13 03:20:43 +08:00
|
|
|
ret = proc_dointvec(table, write, file, buffer, lenp, ppos);
|
2008-05-13 03:20:43 +08:00
|
|
|
|
|
|
|
if (ret || !write || (last_ftrace_enabled == ftrace_enabled))
|
|
|
|
goto out;
|
|
|
|
|
|
|
|
last_ftrace_enabled = ftrace_enabled;
|
|
|
|
|
|
|
|
if (ftrace_enabled) {
|
|
|
|
|
|
|
|
ftrace_startup_sysctl();
|
|
|
|
|
|
|
|
/* we are starting ftrace again */
|
|
|
|
if (ftrace_list != &ftrace_list_end) {
|
|
|
|
if (ftrace_list->next == &ftrace_list_end)
|
|
|
|
ftrace_trace_function = ftrace_list->func;
|
|
|
|
else
|
|
|
|
ftrace_trace_function = ftrace_list_func;
|
|
|
|
}
|
|
|
|
|
|
|
|
} else {
|
|
|
|
/* stopping ftrace calls (just send to ftrace_stub) */
|
|
|
|
ftrace_trace_function = ftrace_stub;
|
|
|
|
|
|
|
|
ftrace_shutdown_sysctl();
|
|
|
|
}
|
|
|
|
|
|
|
|
out:
|
2009-02-14 14:42:44 +08:00
|
|
|
mutex_unlock(&ftrace_lock);
|
ftrace: dynamic enabling/disabling of function calls
2008-05-13 03:20:42 +08:00
|
|
|
return ret;
|
2008-05-13 03:20:42 +08:00
|
|
|
}
|
2008-10-24 18:47:10 +08:00
|
|
|
|
2008-11-26 04:07:04 +08:00
|
|
|
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
|
2008-11-16 13:02:06 +08:00
|
|
|
|
2008-11-26 07:57:25 +08:00
|
|
|
static atomic_t ftrace_graph_active;
|
2009-01-15 05:33:27 +08:00
|
|
|
static struct notifier_block ftrace_suspend_notifier;
|
2008-11-16 13:02:06 +08:00
|
|
|
|
2008-12-03 12:50:05 +08:00
|
|
|
int ftrace_graph_entry_stub(struct ftrace_graph_ent *trace)
|
|
|
|
{
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2008-11-26 07:57:25 +08:00
|
|
|
/* The callbacks that hook a function */
|
|
|
|
trace_func_graph_ret_t ftrace_graph_return =
|
|
|
|
(trace_func_graph_ret_t)ftrace_stub;
|
2008-12-03 12:50:05 +08:00
|
|
|
trace_func_graph_ent_t ftrace_graph_entry = ftrace_graph_entry_stub;
|
2008-11-23 13:22:56 +08:00
|
|
|
|
|
|
|
/* Try to assign a return stack array on FTRACE_RETSTACK_ALLOC_SIZE tasks. */
|
|
|
|
static int alloc_retstack_tasklist(struct ftrace_ret_stack **ret_stack_list)
|
|
|
|
{
|
|
|
|
int i;
|
|
|
|
int ret = 0;
|
|
|
|
unsigned long flags;
|
|
|
|
int start = 0, end = FTRACE_RETSTACK_ALLOC_SIZE;
|
|
|
|
struct task_struct *g, *t;
|
|
|
|
|
|
|
|
for (i = 0; i < FTRACE_RETSTACK_ALLOC_SIZE; i++) {
|
|
|
|
ret_stack_list[i] = kmalloc(FTRACE_RETFUNC_DEPTH
|
|
|
|
* sizeof(struct ftrace_ret_stack),
|
|
|
|
GFP_KERNEL);
|
|
|
|
if (!ret_stack_list[i]) {
|
|
|
|
start = 0;
|
|
|
|
end = i;
|
|
|
|
ret = -ENOMEM;
|
|
|
|
goto free;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
read_lock_irqsave(&tasklist_lock, flags);
|
|
|
|
do_each_thread(g, t) {
|
|
|
|
if (start == end) {
|
|
|
|
ret = -EAGAIN;
|
|
|
|
goto unlock;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (t->ret_stack == NULL) {
|
|
|
|
t->curr_ret_stack = -1;
|
2008-12-02 07:20:39 +08:00
|
|
|
/* Make sure IRQs see the -1 first: */
|
|
|
|
barrier();
|
|
|
|
t->ret_stack = ret_stack_list[start++];
|
2008-12-06 10:43:41 +08:00
|
|
|
atomic_set(&t->tracing_graph_pause, 0);
|
2008-11-23 13:22:56 +08:00
|
|
|
atomic_set(&t->trace_overrun, 0);
|
|
|
|
}
|
|
|
|
} while_each_thread(g, t);
|
|
|
|
|
|
|
|
unlock:
|
|
|
|
read_unlock_irqrestore(&tasklist_lock, flags);
|
|
|
|
free:
|
|
|
|
for (i = start; i < end; i++)
|
|
|
|
kfree(ret_stack_list[i]);
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2009-03-24 13:10:15 +08:00
|
|
|
static void
|
|
|
|
ftrace_graph_probe_sched_switch(struct rq *__rq, struct task_struct *prev,
|
|
|
|
struct task_struct *next)
|
|
|
|
{
|
|
|
|
unsigned long long timestamp;
|
|
|
|
int index;
|
|
|
|
|
2009-03-24 23:06:24 +08:00
|
|
|
/*
|
|
|
|
* Does the user want to count the time a function was asleep?
|
|
|
|
* If so, do not update the time stamps.
|
|
|
|
*/
|
|
|
|
if (trace_flags & TRACE_ITER_SLEEP_TIME)
|
|
|
|
return;
|
|
|
|
|
2009-03-24 13:10:15 +08:00
|
|
|
timestamp = trace_clock_local();
|
|
|
|
|
|
|
|
prev->ftrace_timestamp = timestamp;
|
|
|
|
|
|
|
|
/* only process tasks that we timestamped */
|
|
|
|
if (!next->ftrace_timestamp)
|
|
|
|
return;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Update all the counters in next to make up for the
|
|
|
|
* time next was sleeping.
|
|
|
|
*/
|
|
|
|
timestamp -= next->ftrace_timestamp;
|
|
|
|
|
|
|
|
for (index = next->curr_ret_stack; index >= 0; index--)
|
|
|
|
next->ret_stack[index].calltime += timestamp;
|
|
|
|
}
|
|
|
|
|
2008-11-23 13:22:56 +08:00
|
|
|
/* Allocate a return stack for each task */
|
2008-11-26 04:07:04 +08:00
|
|
|
static int start_graph_tracing(void)
|
2008-11-23 13:22:56 +08:00
|
|
|
{
|
|
|
|
struct ftrace_ret_stack **ret_stack_list;
|
2009-02-18 01:35:34 +08:00
|
|
|
int ret, cpu;
|
2008-11-23 13:22:56 +08:00
|
|
|
|
|
|
|
ret_stack_list = kmalloc(FTRACE_RETSTACK_ALLOC_SIZE *
|
|
|
|
sizeof(struct ftrace_ret_stack *),
|
|
|
|
GFP_KERNEL);
|
|
|
|
|
|
|
|
if (!ret_stack_list)
|
|
|
|
return -ENOMEM;
|
|
|
|
|
2009-02-18 01:35:34 +08:00
|
|
|
/* The cpu_boot init_task->ret_stack will never be freed */
|
|
|
|
for_each_online_cpu(cpu)
|
|
|
|
ftrace_graph_init_task(idle_task(cpu));
|
|
|
|
|
2008-11-23 13:22:56 +08:00
|
|
|
do {
|
|
|
|
ret = alloc_retstack_tasklist(ret_stack_list);
|
|
|
|
} while (ret == -EAGAIN);
|
|
|
|
|
2009-03-24 13:10:15 +08:00
|
|
|
if (!ret) {
|
|
|
|
ret = register_trace_sched_switch(ftrace_graph_probe_sched_switch);
|
|
|
|
if (ret)
|
|
|
|
pr_info("ftrace_graph: Couldn't activate tracepoint"
|
|
|
|
" probe to kernel_sched_switch\n");
|
|
|
|
}
|
|
|
|
|
2008-11-23 13:22:56 +08:00
|
|
|
kfree(ret_stack_list);
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2009-01-15 05:33:27 +08:00
|
|
|
/*
|
|
|
|
* Hibernation protection.
|
|
|
|
* The state of the current task is too unstable during
|
|
|
|
* suspend/restore to disk. We want to protect against that.
|
|
|
|
*/
|
|
|
|
static int
|
|
|
|
ftrace_suspend_notifier_call(struct notifier_block *bl, unsigned long state,
|
|
|
|
void *unused)
|
|
|
|
{
|
|
|
|
switch (state) {
|
|
|
|
case PM_HIBERNATION_PREPARE:
|
|
|
|
pause_graph_tracing();
|
|
|
|
break;
|
|
|
|
|
|
|
|
case PM_POST_HIBERNATION:
|
|
|
|
unpause_graph_tracing();
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
return NOTIFY_DONE;
|
|
|
|
}
|
|
|
|
|
2008-11-26 07:57:25 +08:00
|
|
|
int register_ftrace_graph(trace_func_graph_ret_t retfunc,
|
|
|
|
trace_func_graph_ent_t entryfunc)
|
2008-11-11 14:14:25 +08:00
|
|
|
{
|
2008-11-16 13:02:06 +08:00
|
|
|
int ret = 0;
|
|
|
|
|
2009-02-14 14:42:44 +08:00
|
|
|
mutex_lock(&ftrace_lock);
|
2008-11-16 13:02:06 +08:00
|
|
|
|
2009-03-24 12:18:31 +08:00
|
|
|
/* we currently allow only one tracer registered at a time */
|
|
|
|
if (atomic_read(&ftrace_graph_active)) {
|
|
|
|
ret = -EBUSY;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
2009-01-15 05:33:27 +08:00
|
|
|
ftrace_suspend_notifier.notifier_call = ftrace_suspend_notifier_call;
|
|
|
|
register_pm_notifier(&ftrace_suspend_notifier);
|
|
|
|
|
2008-11-26 07:57:25 +08:00
|
|
|
atomic_inc(&ftrace_graph_active);
|
2008-11-26 04:07:04 +08:00
|
|
|
ret = start_graph_tracing();
|
2008-11-23 13:22:56 +08:00
|
|
|
if (ret) {
|
2008-11-26 07:57:25 +08:00
|
|
|
atomic_dec(&ftrace_graph_active);
|
2008-11-23 13:22:56 +08:00
|
|
|
goto out;
|
|
|
|
}
|
2008-11-26 13:16:25 +08:00
|
|
|
|
2008-11-26 07:57:25 +08:00
|
|
|
ftrace_graph_return = retfunc;
|
|
|
|
ftrace_graph_entry = entryfunc;
|
2008-11-26 13:16:25 +08:00
|
|
|
|
2008-11-26 13:16:24 +08:00
|
|
|
ftrace_startup(FTRACE_START_FUNC_RET);
|
2008-11-16 13:02:06 +08:00
|
|
|
|
|
|
|
out:
|
2009-02-14 14:42:44 +08:00
|
|
|
mutex_unlock(&ftrace_lock);
|
2008-11-16 13:02:06 +08:00
|
|
|
return ret;
|
2008-11-11 14:14:25 +08:00
|
|
|
}
|
|
|
|
|
2008-11-26 04:07:04 +08:00
|
|
|
void unregister_ftrace_graph(void)
|
2008-11-11 14:14:25 +08:00
|
|
|
{
|
2009-02-14 14:42:44 +08:00
|
|
|
mutex_lock(&ftrace_lock);
|
2008-11-16 13:02:06 +08:00
|
|
|
|
2009-03-30 23:11:28 +08:00
|
|
|
if (!unlikely(atomic_read(&ftrace_graph_active)))
|
|
|
|
goto out;
|
|
|
|
|
2008-11-26 07:57:25 +08:00
|
|
|
atomic_dec(&ftrace_graph_active);
|
2009-03-24 13:10:15 +08:00
|
|
|
unregister_trace_sched_switch(ftrace_graph_probe_sched_switch);
|
2008-11-26 07:57:25 +08:00
|
|
|
ftrace_graph_return = (trace_func_graph_ret_t)ftrace_stub;
|
2008-12-03 12:50:05 +08:00
|
|
|
ftrace_graph_entry = ftrace_graph_entry_stub;
|
2008-11-26 13:16:24 +08:00
|
|
|
ftrace_shutdown(FTRACE_STOP_FUNC_RET);
|
2009-01-15 05:33:27 +08:00
|
|
|
unregister_pm_notifier(&ftrace_suspend_notifier);
|
2008-11-16 13:02:06 +08:00
|
|
|
|
2009-03-30 23:11:28 +08:00
|
|
|
out:
|
2009-02-14 14:42:44 +08:00
|
|
|
mutex_unlock(&ftrace_lock);
|
2008-11-11 14:14:25 +08:00
|
|
|
}
|
2008-11-23 13:22:56 +08:00
|
|
|
|
|
|
|
/* Allocate a return stack for newly created task */
|
2008-11-26 04:07:04 +08:00
|
|
|
void ftrace_graph_init_task(struct task_struct *t)
|
2008-11-23 13:22:56 +08:00
|
|
|
{
|
2008-11-26 07:57:25 +08:00
|
|
|
if (atomic_read(&ftrace_graph_active)) {
|
2008-11-23 13:22:56 +08:00
|
|
|
t->ret_stack = kmalloc(FTRACE_RETFUNC_DEPTH
|
|
|
|
* sizeof(struct ftrace_ret_stack),
|
|
|
|
GFP_KERNEL);
|
|
|
|
if (!t->ret_stack)
|
|
|
|
return;
|
|
|
|
t->curr_ret_stack = -1;
|
2008-12-06 10:43:41 +08:00
|
|
|
atomic_set(&t->tracing_graph_pause, 0);
|
2008-11-23 13:22:56 +08:00
|
|
|
atomic_set(&t->trace_overrun, 0);
|
2009-03-24 13:10:15 +08:00
|
|
|
t->ftrace_timestamp = 0;
|
2008-11-23 13:22:56 +08:00
|
|
|
} else
|
|
|
|
t->ret_stack = NULL;
|
|
|
|
}
|
|
|
|
|
2008-11-26 04:07:04 +08:00
|
|
|
void ftrace_graph_exit_task(struct task_struct *t)
|
2008-11-23 13:22:56 +08:00
|
|
|
{
|
2008-11-24 00:33:12 +08:00
|
|
|
struct ftrace_ret_stack *ret_stack = t->ret_stack;
|
|
|
|
|
2008-11-23 13:22:56 +08:00
|
|
|
t->ret_stack = NULL;
|
2008-11-24 00:33:12 +08:00
|
|
|
/* NULL must become visible to IRQs before we free it: */
|
|
|
|
barrier();
|
|
|
|
|
|
|
|
kfree(ret_stack);
|
2008-11-23 13:22:56 +08:00
|
|
|
}
|
2008-12-03 12:50:02 +08:00
|
|
|
|
|
|
|
void ftrace_graph_stop(void)
|
|
|
|
{
|
|
|
|
ftrace_stop();
|
|
|
|
}
|
2008-11-11 14:14:25 +08:00
|
|
|
#endif
|
|
|
|
|