Is it possible to ignore the program's stdout while running valgrind? - valgrind

I use the following command to run valgrind, but ./main's output gets mixed with valgrind's output. I want to keep valgrind's output on stdout. Is there a way to ignore ./main's stdout? Thanks.
valgrind --tool=callgrind --dump-instr=yes --collect-jumps=yes --callgrind-out-file=/dev/stdout ./main

You can use /proc/$$/fd/1 to refer to the original standard output in the calling shell, before the redirection, like this:
valgrind --tool=callgrind --callgrind-out-file=/proc/$$/fd/1 /bin/echo foo > /dev/null
If the system does not support /proc/$$/fd but has /dev/fd (for the current process), this might work (within a script, using bash):
exec {old_stdout}>&1
valgrind --tool=callgrind --callgrind-out-file=/dev/fd/$old_stdout /bin/echo foo > /dev/null
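Applied to the original command, a minimal sketch of the same idea with an explicit descriptor number (assuming /dev/fd is available) would be:
# Duplicate the current stdout as fd 9, discard ./main's own stdout,
# and point callgrind's data at the saved descriptor.
exec 9>&1
valgrind --tool=callgrind --dump-instr=yes --collect-jumps=yes \
    --callgrind-out-file=/dev/fd/9 ./main > /dev/null
exec 9>&-   # close the duplicate when done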

Related

Setting argv[0] for a valgrind child process?

Is there any way to override the argv[0] specified by valgrind when it execve's the child process?
Why?
I'm using Valgrind with a tool that examines its argv[0] to determine the location of its executable in order to find related executables relative to itself. It exec()s a lot of children, most of which are not of any interest and should not be traced by Valgrind.
I can intercept invocations of the commands of interest by populating a directory with wrapper scripts that call the next executable of the same name on the PATH under the control of valgrind. But valgrind always sets argv[0] to the concrete name of the executable it invoked. I want it to pass the name of the wrapper executable instead, so the child command looks in my wrapper directory for related commands to run.
The usual workaround would be to create a symlink to the real executable from the wrapper dir, then invoke the real executable via the symlink. But that won't work here because the wrapper scripts themselves must occupy those names in that directory.
Ugly workaround
So far the only solution I see is to re-exec my wrapper script under valgrind, and have the script exec the real target program without wrapping once it detects it's already running under valgrind. That'll work, but it's ugly. It requires running valgrind with --trace-children=yes in order to inspect the actual target, which for my use case is undesirable. And it's expensive to have those short-lived valgrind runs execute each wrapper script a second time.
Things I tried
I've tried exec -a /path/to/wrapper/command valgrind /path/to/real/command (bash). But valgrind doesn't seem to notice or care that argv[0] isn't valgrind, and does not pass that on to the child process.
Sample wrapper script with hacky workaround
if [ "${RUNNING_UNDER_VALGRIND:-0}" -eq 0 ]; then
# Find the real executable that's next on the PATH. But don't run it
# yet; instead put its path in the environment so it's available
# when we re-exec ourselves under valgrind.
export NEXT_EXEC="$(type -pafP $mycmd | awk '{ if (NR == 2) { print; exit; } }')"
# Re-exec this wrapper under valgrind's control. Valgrind ignores
# argv[0] so there's no way to directly run NEXT_EXEC under valgrind
# and set its argv[0] to point to our $0.
#
RUNNING_UNDER_VALGRIND=1 exec valgrind --tool=memcheck --trace-children=yes "$0" "$#"
else
# We're under valgrind, so exec the real executable that's next on the
# PATH with an argv[0] that points to this wrapper, so it looks here for
# peer executables when it wants to exec them. We looked up NEXT_EXEC
# in our previous life and put it in the environment.
#
exec -a "$0" "${NEXT_EXEC}" "$#"
fi
Yes that's gross. It'd be possible to make a C executable that did the same thing a bit quicker, but the same issues apply with having to trace children, getting unwanted extra logs, etc.
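For reference, a hedged sketch of what the second branch would look like in C, since execv() lets the caller control argv[0] directly (this assumes NEXT_EXEC was exported by an earlier stage, as in the script; the lookup itself is omitted):
/* Sketch: exec the real target with argv[0] left pointing at the wrapper. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char *argv[]) {
    (void)argc;
    const char *next = getenv("NEXT_EXEC");  /* real target, found earlier */
    if (next == NULL) {
        fprintf(stderr, "NEXT_EXEC not set\n");
        return 1;
    }
    /* argv[0] is already the wrapper's own path, so reuse the vector. */
    execv(next, argv);
    perror("execv");
    return 1;
}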
Edit:
This works, so long as your target program(s) don't care about the executable name itself, only the directory.
NEXT_EXEC="$(type -pafP $mycmd | awk '{ if (NR == 2) { print; exit; } }')"
if ! [ "${0}.real" -ef "${NEXT_EXEC}" ]; then
rm -f "${0}.real"
ln "${NEXT_EXEC}" "${0}.real"
fi
exec valgrind --trace-children=no "${0}.real" "$#"
Edit 2
Beginnings of a valgrind patch to add support for a --with-argv0 argument. When passed, valgrind core will treat the first argument after the executable name as the argv[0] to supply in the target's command line. Normally it puts the executable name there, and treats the client argument list as starting at argv[1].
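With such a patch applied, the wrapper's re-exec dance could presumably collapse to a single line (--with-argv0 here is the hypothetical flag from that work-in-progress patch, not a released valgrind option):
# Hypothetical: requires the --with-argv0 patch described above.
exec valgrind --with-argv0 "${NEXT_EXEC}" "$0" "$@"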

How to run Valgrind on my program in C?

How do I use the Valgrind utility with my simple C program in Linux?
Suppose my executable is a.out. How do I check for any leaks in my program with Valgrind?
I basically want to know how to use Valgrind.
It is as simple as:
$ valgrind ./a.out
if your a.out is in the current working directory.
If you already have Valgrind installed, you can learn about its usage by running:
$ valgrind --help
Unfortunately, there is no manual entry for Valgrind when running man valgrind.
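For leak checking specifically, here is a minimal sketch: a deliberately leaky program (leak.c is just an illustrative name), compiled with -g so Valgrind can report file and line numbers, then run under the default memcheck tool with --leak-check=full:
/* leak.c -- deliberately leaks one allocation so memcheck has
 * something to report. */
#include <stdlib.h>

int main(void) {
    char *buf = malloc(64);   /* never freed */
    (void)buf;
    return 0;
}
$ gcc -g leak.c -o a.out
$ valgrind --leak-check=full ./a.out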

Redirect stderr through grep -v in LSF batch job

I'm using a library that generates a ton of output to stderr, and there is really no way to suppress the output directly in the code; it is ROOT's Minuit2 minimizer, which is known for not having a way to suppress its output. I'm running batch jobs through the LSF system, and the error output files are so big that they exceed my disk quota. Erk.
When I run locally on a shell, I do:
python main.py 2> >( grep -v Minuit2 2>&1 )
to suppress the output, as suggested elsewhere.
This works great, but unfortunately I can't seem to get that or any variation of it to work when running on LSF. I think this is due to LSF not spawning the necessary subshell, but it's not clear.
I run on batch by passing LSF a submit script. The relevant line is:
python main.py $INPUT_FILE
which works great, aside from the aforementioned problem of gigantic error files.
When I try changing that line to
python main.py $INPUT_FILE 2> >( grep -v Minuit2 2>&1 )
I end up with
./singleSubmit.sh: line 16: syntax error near unexpected token `>'
./singleSubmit.sh: line 16: `python $MAIN $1 2> >( grep -v Minuit2 2>&1 )'
in the error log file.
Any idea how I could accomplish what I want, or why this is not working?
Thanks a ton!
The syntax you're using works in bash, not in csh/tcsh. Try changing the first line of your submission script to
#!/bin/bash
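If the submission script must stay /bin/sh-compatible rather than bash, a sketch of the classic file-descriptor swap achieves the same filtering without process substitution (it works in any POSIX shell, though still not in csh):
# Swap stdout and stderr, filter what is now stdout (the original stderr)
# through grep, then swap back so both streams end up where they started.
( python main.py "$INPUT_FILE" 3>&1 1>&2 2>&3 | grep -v Minuit2 ) 3>&1 1>&2 2>&3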

GNU Make Error 126, C:\Program is a directory

GNU make gives me a strange error message, which I do not understand.
gao@L8470-130213 ~
$ make
echo Test
C:\Program: C:\Program: is a directory
make: *** [test] Error 126
This is what I thought of verifying:
gao@L8470-130213 ~
$ less makefile
test:
	echo Test
gao@L8470-130213 ~
$ which make
/c/Programx86/GnuWin32/bin/make
gao@L8470-130213 ~
$ /c/Progra~2/GnuWin32/bin/make.exe test
echo Test
C:\Program: C:\Program: is a directory
make: *** [test] Error 126
gao@L8470-130213 ~
$ make --version
GNU Make 3.81
Copyright (C) 2006 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.
There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE.
This program built for i386-pc-mingw32
It feels like some other program is trying to run at the end, and that its path includes some spaces. In that case, what program could it be, and how can I prevent it from running?
I have seen this thread and tried to disable my antivirus, which did not help.
I have also looked into permissions, but I am not sure if the makefile needs execution rights. I can't seem to change that anyway (running in bash on Windows; the makefile is not read-only when I check in Explorer):
gao@L8470-130213 ~
$ ls -l makefile
-rw-r--r-- 1 gao Administ 21 Apr 15 14:53 makefile
gao@L8470-130213 ~
$ chmod +x makefile
gao@L8470-130213 ~
$ ls -l makefile
-rw-r--r-- 1 gao Administ 21 Apr 15 14:53 makefile
What is going on with make, what can I do?
It's not "some other program" that's trying to run, it's the echo command. Make prints the command to be run, echo test, but you never see the output (test) so that means it failed trying to find the echo program. Unfortunately I'm not very familiar with the vagaries of running GNU make on Windows: there are lots of different options. One possibility would be to get a newer version of GNU make; 3.81 is very old. 3.82 is now available and might work better for you.
Good info you added above about your environment re: using bash; that wasn't clear from the original question and on Windows there are many different ways to do things. You're using the mingw version of make; that version (as I understand it) does NOT use bash as the shell to run commands in: it's supposed to be used with native Windows environments which do not, certainly, have bash available. I believe that the version of make you have is invoking commands directly, and/or using command.com. Certainly not a UNIX shell like bash.
If you want to use bash you should set the SHELL make variable to the path of your bash.exe program. If you're using a Cygwin environment you can use the GNU make that comes with Cygwin which behaves more like a traditional make + shell.
Otherwise you'll need to write your commands using Windows command.com statements.
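For instance, a minimal sketch of a makefile that forces bash as the recipe shell (the path is an assumption; point it at wherever your bash.exe actually lives):
# Assumed path; adjust for your MSYS/Cygwin installation.
SHELL := C:/msys64/usr/bin/bash.exe

test:
	echo Test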
Again, I don't use Windows so this is mostly hearsay.
PS. The makefile does not need to be executable.
What is going on is that make doesn't like file names or directory names with spaces in them, such as Program Files. Neither do most of the utilities that makefiles typically rely on, such as the shell to execute commands with.
I create a junction from Program Files to ProgramFiles and use the latter whenever I encounter cases like this.
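For reference, such a junction can be created from a cmd prompt (names illustrative):
mklink /J C:\ProgramFiles "C:\Program Files"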

Redirect stderr to stdout in C shell

When I run the following command in csh, I get nothing, but it works in bash.
Is there any equivalent in csh which can redirect the standard error to standard out?
somecommand 2>&1
The csh shell has never been known for its extensive ability to manipulate file handles in the redirection process.
You can redirect both standard output and error to a file with:
xxx >& filename
but that's not quite what you were after, redirecting standard error to the current standard output.
However, if your underlying operating system exposes the standard output of a process in the file system (as Linux does with /dev/stdout), you can use that method as follows:
xxx >& /dev/stdout
This will force both standard output and standard error to go to the same place as the current standard output, effectively what you have with the bash redirection, 2>&1.
Just keep in mind this isn't a csh feature. If you run on an operating system that doesn't expose standard output as a file, you can't use this method.
However, there is another method. You can combine the two streams into one by sending them into a pipeline with |&; then all you need is a pipeline component that writes its standard input to its standard output. In case you're unaware of such a thing, that's exactly what cat does if you don't give it any arguments. Hence, you can achieve your ends in this specific case with:
xxx |& cat
Of course, there's also nothing stopping you from running bash (assuming it's on the system somewhere) within a csh script to give you the added capabilities. Then you can use the rich redirections of that shell for the more complex cases where csh may struggle.
Let's explore this in more detail. First, create an executable echo_err that will write a string to stderr:
#include <stdio.h>

int main(int argc, char *argv[]) {
    fprintf(stderr, "stderr (%s)\n", (argc > 1) ? argv[1] : "?");
    return 0;
}
Then a control script test.csh which will show it in action:
#!/usr/bin/csh
ps -ef ; echo ; echo $$ ; echo
echo 'stdout (csh)'
./echo_err csh
bash -c "( echo 'stdout (bash)' ; ./echo_err bash ) 2>&1"
The echo of the PID and ps are simply so you can ensure it's csh running this script. When you run this script with:
./test.csh >test.out 2>test.err
(the initial redirection is set up by bash before csh starts running the script), and examine the out/err files, you see:
test.out:
UID PID PPID TTY STIME COMMAND
pax 5708 5364 cons0 11:31:14 /usr/bin/ps
pax 5364 7364 cons0 11:31:13 /usr/bin/tcsh
pax 7364 1 cons0 10:44:30 /usr/bin/bash
5364
stdout (csh)
stdout (bash)
stderr (bash)
test.err:
stderr (csh)
You can see there that the test.csh process is running in the C shell, and that calling bash from within there gives you the full bash power of redirection.
The 2>&1 in the bash command quite easily lets you redirect standard error to the current standard output (as desired) without prior knowledge of where standard output is currently going.
I disagree with the above answer and provide my own. csh DOES have this capability, and here is how it's done:
xxx |& some_exec # will pipe merged output to your some_exec
or
xxx |& cat > filename
or, if you just want to merge the streams (to stdout) without redirecting to a file or some_exec:
xxx |& tee /dev/null
As paxdiablo said, you can use >& to redirect both stdout and stderr. However, if you want them separated, you can use the following:
(command > stdoutfile) >& stderrfile
As indicated, the above will redirect stdout to stdoutfile and stderr to stderrfile.
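A related sketch for the complementary case, keeping stdout on the terminal while capturing only stderr (this assumes an attached terminal, since it relies on /dev/tty):
(command > /dev/tty) >& stderrfile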
xxx >& filename
Or do this to see everything on the screen and have it go to your file:
xxx |& tee ./logfile
What about just:
xxx >& /dev/stdout
I think this is the correct answer for csh.
xxx >/dev/stderr
Note most csh are really tcsh in modern environments:
rmockler> ls -latr /usr/bin/csh
lrwxrwxrwx 1 root root 9 2011-05-03 13:40 /usr/bin/csh -> /bin/tcsh
Using a backtick-embedded statement to demonstrate:
echo "`echo 'standard out1'` `echo 'error out1' >/dev/stderr` `echo 'standard out2'`" | tee -a /tmp/test.txt ; cat /tmp/test.txt
The other suggestions don't work in my csh environment.