I have a project where calculations are run via files. I need to measure the execution time of the functions that are called after running a certain file. The functions are spread across a large number of other files. Below is the perf probe method I use to measure the time of a single function. This method is easy to use when only a few functions need to be measured, but in my case there are many functions in different binaries and files, and creating probe events for each one by hand takes a long time.
Create probe events for the entry and exit points of a function:
sudo perf probe -x ./binary_file_name --add func_name --add func_name%return
Start profiling with the created events:
sudo perf record -e probe_binary_file_name:func_name -e probe_binary_file_name:func_name__return ./a.out
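The recorded probe hits can then be inspected, for example with perf script; the time for one call is the difference between the timestamps of an entry event and the matching return event:

sudo perf script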
As a result, I can find out how many times func_name was called and how long each call took. Is there any way to create probe events for a large number of functions without writing out all their names?
There are a few ways to do this:
1. A perl/python script that parses the .c files and extracts the function names.
2. readelf -s on the .o and/or a.out files, extracting the names that are marked FUNC (a sketch of this is given at the end of this answer).

For the first approach, you could first run the source files through an indenter program (e.g. GNU indent). It can force the function's return type onto a separate line and the function name into column 1. This makes the parsing quite easy: any line that begins with the regex ^([a-zA-Z_][a-zA-Z0-9_]*)[(] will be a function name.
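As a rough, untested sketch of that pipeline (assuming GNU indent's -psl/--procnames-start-lines and -st/--standard-output options, and using file.c and ./binary_file_name as placeholder names), the whole chain from source file to probe creation could look like:

# Untested sketch: indent forces each function name into column 1,
# grep keeps the "name(" lines, tr strips the parenthesis.
indent -psl -st file.c \
    | grep -oE '^[A-Za-z_][A-Za-z0-9_]*\(' \
    | tr -d '(' | sort -u \
    | while read -r fn; do
          sudo perf probe -x ./binary_file_name --add "$fn" --add "${fn}%return"
      done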
Such a script can be run as part of the make step and output the names (in any format you wish) to an output file. This file could also contain the exact perf argument(s) you want.
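For the second approach, a similar untested sketch (again using ./binary_file_name as the placeholder name from the question) pulls the defined function symbols out of the symbol table with readelf and awk and feeds them into the same kind of loop:

# Untested sketch: take every defined FUNC symbol and create an
# entry probe plus a return probe for it.
readelf -sW ./binary_file_name \
    | awk '$4 == "FUNC" && $7 != "UND" && $8 != "" { print $8 }' \
    | sort -u \
    | while read -r fn; do
          sudo perf probe -x ./binary_file_name --add "$fn" --add "${fn}%return"
      done

Creating a very large number of probes this way can take a while, so it may be worth filtering the name list (for example with grep) before the loop.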