I am currently checking my unit tests with valgrind by using
ctest -T memcheck
I would like to create an XML output by setting
MEMORYCHECK_COMMAND_OPTIONS="--xml=yes --xml-file=valgrind.xml"
Now the problem is that I only get the log file for the last test case run. All the others are overwritten.
What I need, but have not found in the documentation, is a way to add the test name or the test number to the XML log file name.
Any ideas how to get them?
Valgrind's --log-file and --xml-file arguments can use %p to insert the PID, or %q{FOO} to insert the value of the environment variable FOO into the file name.
So, if you can put the test name or test number in an environment variable, %q is the best choice. Otherwise, if you do not have too many tests, %p might help, provided PIDs are not recycled too quickly while your tests run.
See https://www.valgrind.org/docs/manual/manual-core.html#opt.log-file for more info.
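As a rough, untested sketch (the variable name TESTNAME is just an example): if you run each test in its own ctest invocation, you can export the test name into the environment so that %q picks it up. This assumes MEMORYCHECK_COMMAND_OPTIONS is configured as something like --xml=yes --xml-file=valgrind-%q{TESTNAME}.xml:
for t in $(ctest -N | sed -n 's/^ *Test *#[0-9]*: *//p'); do
    TESTNAME="$t" ctest -T memcheck -R "^${t}\$"
done
(The sed expression depends on the exact output format of ctest -N, so adjust as needed.)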
Related: I'm using gcov and I'd like to gather coverage results on a per-test-case basis.
QUESTION
When I execute the googletest executable, is it possible to pass an argument on the command line that says to execute only the Nth test case?
I'd even be ok with passing in the name of the fixture+case; I can just use Python with regex to pull out all the tests.
Obviously I can accomplish the same thing by having one test case per .cpp file but that sounds ... stupid.
googletest allows you to run a single test case or even a subset of the tests. You can specify a filter string in the GTEST_FILTER environment variable or in the --gtest_filter command line option, and googletest will only run the tests whose full names (in the form TestSuiteName.TestName) match the filter. More information about the format of the filter string can be found in the Running a Subset of the Tests section of the documentation. googletest also supports the --gtest_list_tests command line option to print the list of all test cases, which can be useful in your case.
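For example (the binary name and test names below are placeholders):
./my_tests --gtest_list_tests                  # print all available tests
./my_tests --gtest_filter=MyFixture.MyCase     # run one test
./my_tests --gtest_filter='MyFixture.*'        # run every test of one fixture
GTEST_FILTER=MyFixture.MyCase ./my_tests       # same thing via the environment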
I just switched to zsh and am now adapting an alias that prints some text (in color) along with a command.
I have been trying to use the $fg array variable, but there is a side effect: the whole command is printed before being executed.
The same thing occurs if I just test an echo with a color code in the terminal:
echo $fg_bold[blue] "test"
]2;echo "test" test #the test is in the right color
Why does the command print itself before doing what it's supposed to do? (Note that this doesn't happen when just printing without any variable in the command.)
Do I have to set a specific zsh option, or use echo with a special parameter, to get rid of that?
Execute the command first (keep its output somewhere), and then issue echo. The easiest way I can think of doing that would be:
echo $fg[red] `ls`
Edit: Ok, so your trouble is some trash before the actual output of echo. You have some funny configuration that is causing you trouble.
What to do (other than inspecting your configuration):
start a shell with zsh -f (it will skip any configuration), and then re-try the echo command: autoload colors; colors; echo $fg_bold[red] foo (this should show you that the problem is in your configuration).
Most likely your configuration defines a precmd function that gets executed before every command (and which is failing in some way). Try which precmd. If that is not defined, try echo $precmd_functions (precmd_functions is an array of functions that get executed before every command). Knowing which code is being executed would help you search for it in your configuration (which I assume you just took from someone else).
If I had to guess, I'd say you are using oh-my-zsh without knowing exactly what you turned on (which is an endless source of troubles like this).
I don't replicate your issue, which I think indicates that it's either an option (that I've set), or it's a zsh version issue:
$ echo $fg_bold[red] test
test
Because I can't replicate it, I'm sure there's an option to stop it happening for you. I do not know what that option is (I'm using heavily modified oh-my-zsh, and still haven't finished learning what all the zsh options do or are).
My suggestions:
You could try using print:
$ print $fg_bold[red] test
test
The print builtin has many more options than echo (see man zshbuiltins).
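For instance, print -P interprets prompt escape sequences, which is another way to get colours that doesn't rely on the $fg arrays (just an illustration; it may or may not sidestep your issue):
print -P '%F{red}test%f'
print -P '%B%F{blue}test%f%b'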
You should also:
Check what version of zsh you're using.
Check what options (setopt) are enabled.
Check your ~/.zshrc (and other loaded files) to see what, if any, options and functions are being run.
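For example, the first two checks are quick one-liners:
zsh --version      # which zsh build you are running
setopt             # options currently enabled in this shell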
This question may suggest checking what TERM you're using, but reading your question it sounds like you're only seeing this behaviour (echoing of the command after entry) when you're using aliases...?
I've been working with TCL for some time now, and I have spent a long time trying to do the following (it seems easy and I think it should be, but I can't get it right):
I need to execute an external program by means of a tcl script. For that, I use the exec command. For using this external program, I need to input a variable amount of files. If I called this program straight from a cmd window, it would be something like:
C:\>myprogram -i file1 -i file2 -i file3 (etc., etc.)
However, when trying to implement this in a dynamic/variable way through Tcl, I get into trouble. The way I do it is by storing in some variable myvar all the "-i fileX" arguments I need (done in a loop), and then passing that as a parameter to the exec command. It would look something like:
exec myprogram $myvar
Doing that apparently creates some issues, because myprogram fails to "see" myvar. I'm guessing that there is some sort of hidden terminator, or a clash of different types of arguments, that means the exec command in the end "sees" only myprogram.
So, my question is, does anyone know how to insert variable arguments into a call to exec?
You can use {*} or eval. See this question for example.
Specializing for your case:
Tcl 8.5 (and later):
exec myprogram {*}$myvar
Tcl 8.4 (and before):
eval [list exec myprogram] [lrange $myvar 0 end]
# Or...
eval [linsert $myvar 0 exec myprogram]
That's right, the old version is ugly (or non-obvious, or both). Because of that, people tended to write this instead:
eval exec myprogram $myvar
but that was slower than expected (OK, not so relevant when running an external program!) and has hazards when $myvar isn't a canonically-formatted list due to the way that eval works. It used to catch out even experienced Tcl programmers, and that's why we introduced new syntax in 8.5, which is specified to be surprise-free and is pretty short too.
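For completeness, a minimal sketch of building $myvar in a loop and then expanding it (the file and program names are made up); the important part is that lappend keeps $myvar a well-formed list:
set myvar {}
foreach f {file1 file2 file3} {
    lappend myvar -i $f
}
exec myprogram {*}$myvar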
I have to do the verification of a DPRAM.
Each test case is written in a different file, named test1.v, test2.v, etc.
I want to write a (Unix) script such that when I type run test1.v, only that test case will run.
Note: test1.v contains only a task, which includes read asserts, write asserts, etc.
The test bench is a separate file which includes clock and component instantiation.
When run test1.v is executed, it should link the task in test1.v to the testbench, and then the output is obtained.
I have done the coding in Verilog.
How to do this?
So, as far as I can make out, your different tests, or 'testcases' are in files named test<n>.v. And I'll assume that each of these testcases has a task that has the same name in all files, say run_testcase. This means that your testbench (testbench.v, say) must look something like:
module testbench();
  ...
  `include "test.v"  // <- problem is this line
  ...
  initial begin
    // Some setup
    run_testcase();
    //
    $finish;
  end
endmodule
So your problem is the include line - a different file needs to be included depending on the testcase. I can think of two ways of solving this. The first one is, as toolic suggested, using a symbolic link to 'rename' the testcase file. So an example wrapper script (run_sim1) to launch your sim might look a bit like:
#! /usr/bin/env sh
testcase=$1
ln -sf ${testcase} test.v
my_simulator testbench.v
Another way is to use a macro, and define this in the wrapper script for your simulation. Your testbench would be modified to look like:
...
`include `TESTCASE
...
And the wrapper script (run_sim2):
#! /usr/bin/env sh
testcase=$1
my_simulator testbench.v +define+TESTCASE=\"${testcase}\"
The quotes are important here, as the verilog include directive expects them. Unfortunately, we can't leave the quotes in the testbench because it will then look like a string to verilog, and the TESTCASE macro won't be expanded.
One way to do it is to have the testbench file include a test file with a generic name:
`include "test.v"
Then, have your script create a symbolic link to the test you want to run. For example, in a shell script or Makefile, to run test1.v:
ln -sf test1.v test.v
run_sim
To run test2.v, your script would substitute test2 for test1, etc.
It seems as if a script with #! prefix can have the interpreter name and ONLY one argument. Thus:
#!/bin/ls -l
works, but
#!/usr/bin/env ls -l
doesn't
Do you agree? Any thoughts?
Francesc
Different Unixes interpret #! differently. Here's a comprehensive-looking writeup: http://www.in-ulm.de/~mascheck/various/shebang/
It seems that the lowest common denominator across platforms is "the interpreter (which must not itself be a script) and no more than one argument".
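For example (awk is just a convenient interpreter to illustrate the point with):
#!/usr/bin/awk -f
# Works: the kernel passes "-f" plus the script path to awk.
# "#!/usr/bin/env awk -f" is not portable: on many systems everything
# after the interpreter ("awk -f") is handed over as a single argument.
BEGIN { print "hello" }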
Originally, we only had one shell on Unix. When you asked to run a command, the shell would attempt to invoke one of the exec() system calls on it. If the command was an executable, the exec would succeed and the command would run. If the exec() failed, the shell would not give up; instead it would try to interpret the command file as if it were a shell script.
Then unix got more shells and the situation became confused. Most folks would write scripts in one shell and type commands in another. And each shell had differing rules for feeding scripts to an interpreter.
This is when the "#! /" trick was invented. The idea was to let the kernel's exec() system calls succeed with shell scripts. When the kernel tries to exec() a file, it looks at the first 4 bytes, which represent an integer called a magic number. This tells the kernel whether it should try to run the file or not. So "#! /" was added to the magic numbers the kernel knows, and the kernel was extended to actually be able to run shell scripts by itself. But some people could not type "#! /"; they kept leaving the space out. So the kernel was extended a bit again to allow "#!/" to work as a special 3-byte magic number.