author		Song Chen <chensong_2000@189.cn>	2022-12-30 09:33:38 +0300
committer	Masami Hiramatsu (Google) <mhiramat@kernel.org>	2023-02-24 03:44:27 +0300
commit		672a2bf84061f0f19acfc5869f5b3689759a55a8 (patch)
tree		139c25b500f03b69c92c62c2b1352a2da740583c /kernel/trace/trace_events_synth.c
parent		196b6389a363e0d7e6b6f2654b9889f9c821b9d3 (diff)
download	linux-672a2bf84061f0f19acfc5869f5b3689759a55a8.tar.xz
kernel/trace: Provide default implementations of functions declared in trace_probe_tmpl.h
There are six functions declared in trace_probe_tmpl.h which each includer is expected to define:
1. fetch_store_strlen
2. fetch_store_string
3. fetch_store_strlen_user
4. fetch_store_string_user
5. probe_mem_read
6. probe_mem_read_user
Every C file that includes trace_probe_tmpl.h has to implement all of them, otherwise the build fails with warnings and errors. Some of these implementations are identical across includers, notably kprobe and eprobe, so those two files carry a lot of redundant code.
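For reference, the contract imposed on includers looks roughly like the sketch below, reconstructed from the description above rather than copied verbatim from the header (the nokprobe_inline qualifier is an assumption):

/* Sketch of the six hooks trace_probe_tmpl.h expects its includer to
 * define before the template code can use them. */
static nokprobe_inline int fetch_store_strlen(unsigned long addr);
static nokprobe_inline int fetch_store_string(unsigned long addr, void *dest, void *base);
static nokprobe_inline int fetch_store_strlen_user(unsigned long addr);
static nokprobe_inline int fetch_store_string_user(unsigned long addr, void *dest, void *base);
static nokprobe_inline int probe_mem_read(void *dest, void *src, size_t size);
static nokprobe_inline int probe_mem_read_user(void *dest, void *src, size_t size);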
This patch provides default implementations that kprobe and eprobe can share, simply by including trace_probe_kernel.h together with trace_probe_tmpl.h. That removes the redundant code, improves readability, and, more importantly, makes it easier to build new features on top of trace probes later.
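To make the idea concrete, here is a minimal sketch of one such shared default, a kernel-space string-length fetcher modeled on the code this patch consolidates; the exact body in trace_probe_kernel.h may differ:

/* Minimal sketch of a shared default: walk the string one byte at a
 * time with a fault-safe copy, returning its length including the
 * trailing NUL, or 0 if the address cannot be read at all.
 * MAX_STRING_SIZE comes from trace_probe.h. */
static nokprobe_inline int
fetch_store_strlen(unsigned long addr)
{
	int ret, len = 0;
	u8 c;

	do {
		ret = copy_from_kernel_nofault(&c, (u8 *)addr + len, 1);
		if (ret)
			return 0;
		len++;
	} while (c && len < MAX_STRING_SIZE);

	return len;
}

Any file that includes trace_probe_kernel.h together with trace_probe_tmpl.h then picks up such a definition instead of supplying its own copy.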
Link: https://lore.kernel.org/all/1672382018-18347-1-git-send-email-chensong_2000@189.cn/
Signed-off-by: Song Chen <chensong_2000@189.cn>
Reported-by: kernel test robot <lkp@intel.com>
Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Diffstat (limited to 'kernel/trace/trace_events_synth.c')
-rw-r--r--	kernel/trace/trace_events_synth.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/kernel/trace/trace_events_synth.c b/kernel/trace/trace_events_synth.c
index 67592eed0be8..76590f50930c 100644
--- a/kernel/trace/trace_events_synth.c
+++ b/kernel/trace/trace_events_synth.c
@@ -420,12 +420,12 @@ static unsigned int trace_string(struct synth_trace_event *entry,
 		data_offset += event->n_u64 * sizeof(u64);
 		data_offset += data_size;
 
-		len = kern_fetch_store_strlen((unsigned long)str_val);
+		len = fetch_store_strlen((unsigned long)str_val);
 
 		data_offset |= len << 16;
 		*(u32 *)&entry->fields[*n_u64] = data_offset;
 
-		ret = kern_fetch_store_string((unsigned long)str_val, &entry->fields[*n_u64], entry);
+		ret = fetch_store_string((unsigned long)str_val, &entry->fields[*n_u64], entry);
 
 		(*n_u64)++;
 	} else {
@@ -473,7 +473,7 @@ static notrace void trace_event_raw_event_synth(void *__data,
 		val_idx = var_ref_idx[field_pos];
 		str_val = (char *)(long)var_ref_vals[val_idx];
 
-		len = kern_fetch_store_strlen((unsigned long)str_val);
+		len = fetch_store_strlen((unsigned long)str_val);
 
 		fields_size += len;
 	}