I use start-stop-daemon to start up programs and would like to use it together with Valgrind.
This is how I use start-stop-daemon:
start-stop-daemon --start --background --exec ${BINPATH}/myPgm -- myPgm
This is how I use Valgrind on a standalone application (junk):
valgrind --tool=memcheck --leak-check=yes ./junk
and that works.
I would like to do something like:
start-stop-daemon --start --background --exec valgrind --tool=memcheck --leak-check=yes --log-file=/usr/magnus/logFile ${BINPATH}/myPgm -- myPgm
It seems start-stop-daemon accepts valgrind itself (if I pass valgrind without its flags --tool=memcheck --leak-check=yes --log-file=/usr/magnus/logFile, the command is accepted), but it rejects the valgrind flags. For those I get:
start-stop-daemon: unrecognized option '--tool=memcheck'
Does anybody know how this can be done?
The "--" in there is used to separate start-stop-daemon's arguments from the ones passed to your executable. So, the myPgm you have after "--" is actually supplied as an argument to your myPgm executable. I think it's extraneous in your first example.
You need to use "--" to split valgrind's args out, like this:
start-stop-daemon --start --background --exec valgrind -- --tool=memcheck --leak-check=yes --log-file=/usr/magnus/logFile ${BINPATH}/myPgm
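To see why the placement of "--" matters, here's a minimal sketch of that kind of option parsing (runwrap is a made-up stand-in for start-stop-daemon, not a real tool):

```shell
#!/usr/bin/env bash
# runwrap mimics how start-stop-daemon splits its command line: it consumes
# its own --options until it hits "--", then treats everything after it as
# belonging to the program being launched.
runwrap() {
    while [ $# -gt 0 ]; do
        case "$1" in
            --) shift; break ;;                      # stop parsing; the rest is the program's
            --*) echo "wrapper option: $1"; shift ;;
            *) shift ;;
        esac
    done
    printf 'program arg: %s\n' "$@"
}

# "--leak-check=yes" is not rejected as an unrecognized option because it
# comes after "--".
runwrap --start --background -- --leak-check=yes myPgm
```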
Related
Is there any way to override the argv[0] specified by valgrind when it execve's the child process?
Why?
I'm using Valgrind with a tool that examines its argv[0] to determine the location of its executable in order to find related executables relative to itself. It exec()s a lot of children, most of which are not of any interest and should not be traced by Valgrind.
I can intercept invocations of the commands of interest by populating a directory with wrapper scripts that call the next executable of the same name on the PATH under the control of valgrind. But valgrind always sets argv[0] to the concrete name of the executable it invoked. I want it to pass the name of the wrapper executable instead, so the child command looks in my wrapper directory for related commands to run.
The usual workaround would be to create a symlink to the real executable from the wrapper dir, then invoke the real executable via the symlink. But that won't work here because that's where the wrapper scripts must exist.
Ugly workaround
So far the only solution I see is to re-exec my wrapper script under valgrind, detect that the wrapper script is running under valgrind, and exec the real target program without wrapping when the script detects it's already running under valgrind. That'll work, but it's ugly. It requires that valgrind --trace-children=yes in order to inspect the actual target, which for my use case is undesirable. And it's expensive to have those short-lived valgrind commands run each wrapper script a second time.
Things I tried
I've tried exec -a /path/to/wrapper/command valgrind /path/to/real/command (bash). But valgrind doesn't seem to notice or care that argv[0] isn't valgrind, and does not pass that on to the child process.
Sample wrapper script with hacky workaround
if [ "${RUNNING_UNDER_VALGRIND:-0}" -eq 0 ]; then
# Find the real executable that's next on the PATH. But don't run it
# yet; instead put its path in the environment so it's available
# when we re-exec ourselves under valgrind.
export NEXT_EXEC="$(type -pafP "$mycmd" | awk '{ if (NR == 2) { print; exit; } }')"
# Re-exec this wrapper under valgrind's control. Valgrind ignores
# argv[0] so there's no way to directly run NEXT_EXEC under valgrind
# and set its argv[0] to point to our $0.
#
RUNNING_UNDER_VALGRIND=1 exec valgrind --tool=memcheck --trace-children=yes "$0" "$@"
else
# We're under valgrind, so exec the real executable that's next on the
# PATH with an argv[0] that points to this wrapper, so it looks here for
# peer executables when it wants to exec them. We looked up NEXT_EXEC
# in our previous life and put it in the environment.
#
exec -a "$0" "${NEXT_EXEC}" "$@"
fi
Yes, that's gross. It'd be possible to make a C executable that does the same thing a bit quicker, but the same issues apply: having to trace children, getting unwanted extra logs, etc.
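Stripped of valgrind, the re-exec-and-detect pattern the script relies on looks like this (a sketch; WRAPPED plays the role of RUNNING_UNDER_VALGRIND):

```shell
# Write the two-pass wrapper to a temp file so "$0" refers to the wrapper
# itself when it re-execs.
wrapper=$(mktemp)
cat > "$wrapper" <<'EOF'
#!/usr/bin/env bash
if [ "${WRAPPED:-0}" -eq 0 ]; then
    # First pass: set the marker and re-exec ourselves. This is the point
    # where valgrind would be inserted in front of "$0".
    WRAPPED=1 exec bash "$0" "$@"
fi
# Second pass: the marker is set, so do the real work directly.
echo "running wrapped: $*"
EOF

bash "$wrapper" a b
# prints: running wrapped: a b
```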
Edit:
This works, so long as your target program(s) don't care about the executable name itself, only the directory.
NEXT_EXEC="$(type -pafP "$mycmd" | awk '{ if (NR == 2) { print; exit; } }')"
if ! [ "${0}.real" -ef "${NEXT_EXEC}" ]; then
rm -f "${0}.real"
ln "${NEXT_EXEC}" "${0}.real"
fi
exec valgrind --trace-children=no "${0}.real" "$@"
Edit 2
Beginnings of a valgrind patch to add support for a --with-argv0 argument. When passed, valgrind core will treat the first argument after the executable name as the argv[0] to supply in the target's command line. Normally it puts the executable name there, and treats the client argument list as starting at argv[1].
I use CMake. A custom build step saves error output when it fails.
FIND_PROGRAM (BASH bash HINTS /bin)
SET (BASH_CMD "my-script input.file >output.file 2>error.file")
ADD_CUSTOM_COMMAND (
OUTPUT output.file
COMMAND ${CMAKE_COMMAND} -E env ${BASH} -c "${BASH_CMD}"
...)
This works: if my-script fails for the given input.file, its stderr is saved in error.file. However, when I run make and the target fails to build, the normal output does not make the location of error.file obvious. (The actual path of this file is generated in a convoluted way.)
I don't want the noisy stderr to show in the terminal during make. I would like to do something like
MESSAGE ("input.file failed, see error.file")
(ideally coloured red or something) to be executed when the command for output.file failed.
Can I express this behaviour in CMakeLists.txt recipes?
Not sure about the highlighting, but you could create a CMake script file that executes the command via execute_process, check its exit code, and print a custom message if there's an issue. The following example runs on Windows, not on Linux, but it should be sufficient for demonstration.
Some command that fails: script.bat
echo "some long message" 1>&2
exit 1
CMake script: execute_script_bat.cmake
execute_process(COMMAND script.bat RESULT_VARIABLE _EXIT_CODE ERROR_FILE error.log)
if (NOT _EXIT_CODE EQUAL 0)
message(FATAL_ERROR "command failed; output see ${CMAKE_SOURCE_DIR}/error.log")
endif()
CMakeLists.txt
add_custom_command(
OUTPUT output.file
COMMAND "${CMAKE_COMMAND}" -P "${CMAKE_CURRENT_SOURCE_DIR}/execute_script_bat.cmake")
Additional info can be passed by adding -D "SOME_VARIABLE=some value" arguments after "${CMAKE_COMMAND}"
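For example (a sketch; ERROR_DIR is a made-up variable name, not a predefined CMake variable):

```cmake
# CMakeLists.txt side: forward a value into the script invocation.
#   COMMAND "${CMAKE_COMMAND}" -D "ERROR_DIR=${CMAKE_BINARY_DIR}"
#           -P "${CMAKE_CURRENT_SOURCE_DIR}/execute_script_bat.cmake"

# Script side: anything passed with -D is visible as an ordinary variable.
execute_process(COMMAND script.bat RESULT_VARIABLE _EXIT_CODE
                ERROR_FILE "${ERROR_DIR}/error.log")
if (NOT _EXIT_CODE EQUAL 0)
    message(FATAL_ERROR "command failed; output in ${ERROR_DIR}/error.log")
endif()
```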
I use the following command to run valgrind, but ./main's output gets mixed with valgrind's output. I want to keep valgrind's output on stdout. Is there a way to discard ./main's stdout? Thanks.
valgrind --tool=callgrind --dump-instr=yes --collect-jumps=yes --callgrind-out-file=/dev/stdout ./main
You can use /proc/$$/fd/1 to refer to the original standard output in the calling shell, before the redirection, like this:
valgrind --tool=callgrind --callgrind-out-file=/proc/$$/fd/1 /bin/echo foo > /dev/null
If the system does not support /proc/$$/fd but has /dev/fd (for the current process), this might work (within a script, using bash):
exec {old_stdout}>&1
valgrind --tool=callgrind --callgrind-out-file=/dev/fd/$old_stdout /bin/echo foo > /dev/null
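Putting the /dev/fd variant together (a bash-only sketch, since {var}>&1 needs bash): the saved descriptor keeps pointing at the original stdout even after stdout is redirected away.

```shell
# {var}>&1 is a bash feature, so run the demo explicitly under bash.
bash <<'EOF'
# Duplicate the current stdout into a fresh descriptor; bash picks a free
# fd number and stores it in $old_stdout.
exec {old_stdout}>&1
# Discard stdout for the rest of the script...
exec 1>/dev/null
echo "this line disappears"
# ...but the saved descriptor still reaches the original stdout, which is
# where --callgrind-out-file=/dev/fd/N output would land.
echo "kept" > /dev/fd/$old_stdout
EOF
# prints: kept
```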
I'm trying to debug a compilation problem, but I cannot seem to get GCC (or maybe it is make??) to show me the actual compiler and linker commands it is executing.
Here is the output I am seeing:
CCLD libvirt_parthelper
libvirt_parthelper-parthelper.o: In function `main':
/root/qemu-build/libvirt-0.9.0/src/storage/parthelper.c:102: undefined reference to `ped_device_get'
/root/qemu-build/libvirt-0.9.0/src/storage/parthelper.c:116: undefined reference to `ped_disk_new'
/root/qemu-build/libvirt-0.9.0/src/storage/parthelper.c:122: undefined reference to `ped_disk_next_partition'
/root/qemu-build/libvirt-0.9.0/src/storage/parthelper.c:172: undefined reference to `ped_disk_next_partition'
/root/qemu-build/libvirt-0.9.0/src/storage/parthelper.c:172: undefined reference to `ped_disk_next_partition'
collect2: ld returned 1 exit status
make[3]: *** [libvirt_parthelper] Error 1
What I want to see should be similar to this:
$ make
gcc -Wall -c -o main.o main.c
gcc -Wall -c -o hello_fn.o hello_fn.c
gcc main.o hello_fn.o -o main
Notice how this example has the complete gcc command displayed. The above example merely shows things like "CCLD libvirt_parthelper". I'm not sure how to control this behavior.
To invoke a dry run:
make -n
This will show what make is attempting to do.
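For example (a sketch that assumes GNU make is installed; the file names are made up):

```shell
# Create a throwaway project with one rule.
demo=$(mktemp -d)
printf 'hello.o: hello.c\n\tgcc -Wall -c -o hello.o hello.c\n' > "$demo/Makefile"
touch "$demo/hello.c"

# -n prints the recipe commands without executing them, so gcc never runs.
make -C "$demo" -n
```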
Build system independent method
make SHELL='sh -x'
is another option. Sample Makefile:
a:
#echo a
Output:
+ echo a
a
This sets the special SHELL variable for make, and -x tells sh to print the expanded line before executing it.
One advantage over -n is that it actually runs the commands. I have found that for some projects (e.g. the Linux kernel) -n may stop much earlier than a real build, probably because of dependency problems.
One downside of this method is that you have to ensure that the shell used is sh; it is the default shell used by Make, as both are POSIX, but it could be changed with the SHELL make variable.
Doing sh -v would be cool as well, but Dash 0.5.7 (Ubuntu 14.04's sh) ignores -v for -c commands (which seems to be how make invokes the shell), so it doesn't do anything.
make -p may also interest you: it prints the values of the variables that are set.
CMake generated Makefiles always support VERBOSE=1
As in:
mkdir build
cd build
cmake ..
make VERBOSE=1
Dedicated question at: Using CMake with GNU Make: How can I see the exact commands?
Library makefiles, which are generated by autotools (the ./configure you have to issue) often have a verbose option, so basically, using make VERBOSE=1 or make V=1 should give you the full commands.
But this depends on how the makefile was generated.
The -d option might help, but it will give you an extremely long output.
Since GNU Make version 4.0, the --trace argument is a nice way to see what a makefile does and why, outputting lines like:
makefile:8: target 'foo.o' does not exist
or
makefile:12: update target 'foo' due to: bar
Use make V=1
Other suggestions here:
make VERBOSE=1 - did not work, at least in my trials.
make -n - displays only the logical operations, not the command lines being executed, e.g. CC source.cpp
make --debug=j - works as well, but might also enable multi-threaded building, causing extra output.
I like to use:
make --debug=j
https://linux.die.net/man/1/make
--debug[=FLAGS]
Print debugging information in addition to normal processing. If the FLAGS are omitted, then the behavior is the same as if -d was specified. FLAGS may be a for all debugging output (same as using -d), b for basic debugging, v for more verbose basic debugging, i for showing implicit rules, j for details on invocation of commands, and m for debugging while remaking makefiles.
Depending on your automake version, you can also use this:
make AM_DEFAULT_VERBOSITY=1
Reference: AM_DEFAULT_VERBOSITY
Note: I added this answer since V=1 did not work for me.
In case you want to see all commands (including the compilation commands) of the default target, run:
make --always-make --dry-run
make -Bn
To show the commands that would be executed on the next run of make:
make --dry-run
make -n
You are free to choose a target other than the default in this example.
Dry runs are a super important feature of workflow languages. What I am mostly looking for is what would be executed if I run the command, which is exactly what one sees when running make -n.
However analogical functionality snakemake -n prints something like
Building DAG of jobs...
rule produce_output:
output: my_output
jobid: 0
wildcards: var=something
Job counts:
count jobs
1 produce_output
1
The log contains just about everything except the commands that get executed. Is there a way to get the commands from snakemake?
snakemake -p --quiet -n
-p for print shell commands
-n for dry run
--quiet for removing the rest
EDIT 2019-Jan
This solution seems broken for recent versions of snakemake
snakemake -p -n
Avoid the --quiet suggested in @eric-c's answer: at least in some situations the combination -p -n -q does not print the commands that -p -n alone prints.