Is it possible to merge coverage data from two executables with gcov/gcovr? - g++

On one project, I'm running the test cases on three different executables, compiled with different options. Depending on the options, some code paths are taken or not. Right now, I'm only using the coverage data from one executable.
I'm using gcovr to generate an XML report that is then parsed by Sonar:
gcovr -x -b -r . --object-directory=debug/test > coverage_report.xml
I have three sets of gcda and gcno files, but I don't know how to generate a global report of them.
Is there any way to do that?

Assuming that by "compiled with different options" you mean options that lead to different outputs after preprocessing, I was able to do it with the help of lcov (as mentioned by k0n3ru). Here's the sample code in file sut.c:
#include "sut.h"
#include <limits.h>
int foo(int a) {
#if defined(ADD)
a += 42;
#endif
#if defined(SUB)
a -= 42;
#endif
return a;
}
with sut.h only providing the declaration of foo, and a simple main in test.c that calls foo and prints the result (both are sketched below, after the command sequence). Then, with this sequence of commands, I was able to create a total.info file with 100% coverage for sut.c:
> g++ --coverage -DADD test.c sut.c -o add.out
> ./add.out
> lcov -c -d . -o add.info # save data from .gcda/.gcno into add.info
> g++ --coverage -DSUB test.c sut.c -o sub.out
> ./sub.out
> lcov -c -d . -o sub.info # save again, this time into sub.info
> lcov -a add.info -a sub.info -o total.info # combine them into total.info
> genhtml total.info
which then shows full (100%) coverage for sut.c in the generated HTML report.
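For completeness, sut.h and test.c are not shown in the original post; they might look roughly like this (a sketch only):

/* sut.h -- only declares foo, as described above */
#ifndef SUT_H
#define SUT_H
int foo(int a);
#endif

/* test.c -- a simple main that calls foo and prints the result */
#include <stdio.h>
#include "sut.h"

int main(void) {
    printf("foo(0) = %d\n", foo(0));
    return 0;
}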
EDIT (thanks to Gluttton for reminding me to add this part): Going from the total.info file in lcov format to Cobertura XML output should then be possible with the help of the "lcov to cobertura XML converter" provided here (although I have not tried that): https://github.com/eriwen/lcov-to-cobertura-xml
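As noted, I have not tried the converter myself; based on its README, an invocation might look roughly like this (the script name and flags may differ between versions):

# convert the merged lcov data to Cobertura XML for Sonar (untested sketch)
python lcov_cobertura.py total.info --output coverage_report.xml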
The fact that merging coverage information is possible, however, certainly does not mean that it is a good idea: coverage, in my opinion, has only limited informative value regarding the quality of a test suite, and merging coverage results from different preprocessor outputs decreases that value even further.
This is because it reduces the chances for developers to learn about scenarios they have not considered: with conditional compilation, the control structure and data flow of the code can vary tremendously between preprocessor outputs, so coverage information obtained by 'overlaying' results from test runs of different preprocessor outputs can make a meaningful interpretation impossible.

Related

How do I set up instruction & data memory addresses when using "riscv32-unknown-elf-gcc"?

I designed a RISC-V RV32IM processor, and I used "riscv32-unknown-elf-gcc" to generate code for testing.
However, the PC (instruction memory address) and the data memory addresses of the generated code had arbitrary values. I used this command:
riscv32-unknown-elf-gcc -march=rv32im -mabi=ilp32 -nostartfiles test.c
How can I set the instruction and data memory addresses I want?
Thanks.
Thank you for the answer.
I designed only the hardware, and this is my first time using the software toolchain.
Please bear with me if my question is rudimentary.
The figure shows the result of the "-v" option.
I can't modify the script file in place because I use the RISC-V toolchain in a Docker environment.
So I copied the script file (elf32lriscv.x) and modified it, changing 0x10000 to 0x00000.
The file name of the copied script is "test5.x".
It was then executed as follows.
What am I doing wrong?
The RISC-V compiler is using the default linker script to place the text and data sections.
If you add the -v option to your command line, riscv32-unknown-elf-gcc -v -march=rv32im -mabi=ilp32 -nostartfiles test.c, you will see the emulation used by collect2/ld (normally it will be -melf32lriscv). You can find the linker scripts in ${path_to_toolchain}/riscv32-unknown-elf/lib/ldscripts/ (the default one is the .x file, e.g. elf32lriscv.x).
You can also use riscv32-unknown-elf-ld --verbose, as explained by @Frant. However, you need to be careful if the toolchain was compiled with multilib enabled and you compile for rv64 while the default is rv32, or vice versa. That is probably not the case here, but to be sure you can specify the architecture with -A elf32riscv for rv32.
To set the addresses you can create your own linker script, or copy and modify the default one. You can change only the executable start address, as explained by @Frant, or make more extensive modifications and place whatever you want wherever you want.
Once your own linker script is ready, you can pass it to the linker with -Wl,-T,${own_linker_script}. Your command will then be riscv32-unknown-elf-gcc -march=rv32im -mabi=ilp32 -nostartfiles test.c -Wl,-T,${own_linker_script}
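For illustration, a custom linker script could place the sections at fixed addresses along these lines. This is only a sketch: the memory names, origins, and lengths are placeholders, and the real default script (elf32lriscv.x) contains many more output sections that you will probably want to keep.

/* sketch of a minimal custom linker script -- origins and lengths are placeholders */
MEMORY
{
    imem (rx)  : ORIGIN = 0x00000000, LENGTH = 64K    /* instruction memory */
    dmem (rwx) : ORIGIN = 0x00010000, LENGTH = 64K    /* data memory */
}

SECTIONS
{
    .text   : { *(.text*) }            > imem   /* code */
    .rodata : { *(.rodata*) }          > dmem   /* read-only data */
    .data   : { *(.data*) }            > dmem   /* initialized data */
    .bss    : { *(.bss*) *(COMMON) }   > dmem   /* zero-initialized data */
}

It would then be passed to gcc via -Wl,-T,... exactly as in the command above.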

add_custom_target multiple dependencies provided by a list

set(OUTPUT_PATH "some_path/some_path2/")
set(NAME_XML "external/some_folder/somexml.xml")
set(OUTPUT_DIRECTORY "header1.h" "header2.h" "header3.h")
add_custom_target(
some_target ALL
DEPENDS ${OUTPUT_PATH}header1.h
DEPENDS ${OUTPUT_PATH}header2.h
......
)
foreach(item ${OUTPUT_DIRECTORY})
message(STATUS "testing..." ${item})
add_custom_command(
COMMAND python3 ${OUTPUT_PATH}/main.py -n "1" -p "${OUTPUT_PATH}" -f "${NAME_XML}" -o "${item}"
DEPENDS ${NAME_XML}
OUTPUT ${OUTPUT_PATH}${item}
COMMENT "some comment: ${item}"
)
endforeach(item)
The goal is for the Python script to be called whenever a header file is missing (checked per header) or has been modified. Similarly, if the XML file has been modified, I want to regenerate all header files by calling the Python script.
The Python script lets us specify individual header files to be generated, which is why I have this foreach; as a result, I want it to be called only under the conditions described above.
How can I modify the code to achieve that, and how can I pass OUTPUT_DIRECTORY as a list to add_custom_target rather than adding a separate DEPENDS line for each header, as in my code example?
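For what it's worth, one possible shape of a solution, sketched under the assumptions of the variable names above and untested, is to collect the full paths of the generated headers in a list and hand that single list to DEPENDS:

set(GENERATED_HEADERS "")                      # will hold the full paths of all generated headers
foreach(item ${OUTPUT_DIRECTORY})
    add_custom_command(
        OUTPUT ${OUTPUT_PATH}${item}           # re-run when this header is missing or out of date
        COMMAND python3 ${OUTPUT_PATH}/main.py -n "1" -p "${OUTPUT_PATH}" -f "${NAME_XML}" -o "${item}"
        DEPENDS ${NAME_XML}                    # re-run when the XML changes
        COMMENT "Generating ${item}"
    )
    list(APPEND GENERATED_HEADERS ${OUTPUT_PATH}${item})
endforeach()

add_custom_target(
    some_target ALL
    DEPENDS ${GENERATED_HEADERS}               # the whole list in a single DEPENDS
)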

can gcc make test if a defined symbol exists in a source file

Is it possible for (GNU) make to test whether a define exists in one of the source files, i.e.
ifeq (DEFINED_BLAH_BLAH,3)
#echo is equal to 3
else
#echo is not equal to 3
endif
I've looked at this, and to expand on the suggestion from the comment, you could do the following. It's not elegant, and probably not the best available solution, but it works:
echo -ne "#if 1==MYVAL\n0\n#else\n1\n#endif\n" | cpp -P -imacros test.h
Or to call it through gcc or g++:
echo -ne "#if 1==MYVAL\n0\n#else\n1\n#endif\n" | \
gcc -Xpreprocessor -P -E -imacros test.h -
These print a shell-style 0 (true: MYVAL is defined in test.h and equals 1) or 1 on stdout, which you can then test for in make or in the shell.
You may also want to strip all blank lines by appending | grep -v '^$'.
To elaborate a bit more on the above: I create (echo) a simple file and run it through the preprocessor, which checks the given macro for equality and leaves either 0 or 1 in the output. -P is used for cpp because the input is not really a C file and we do not need any of the extra bits in the output. -imacros means: retain the macros defined in that file, but discard any output generated by processing it.
It should also be noted that if your conditional definitions need to take into account defines passed to the compiler, you would have to pass those to this cpp run as well.
Also note that it does not really matter whether you use test.h (the header you know should contain the macro definition) or main.c (a source file that includes that header and others); both yield the same result, namely whatever the macro's value was once cpp had read and (pre)processed the file given to -imacros.
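For example, one way to wire this into a Makefile could look like the sketch below. The helper file check.c is hypothetical (it is not part of the question) and simply contains the five probe lines that were echoed above; keeping them in a file sidesteps the awkward quoting of # characters inside $(shell ...).

# check.c contains exactly the probe from above:
#     #if 1==MYVAL
#     0
#     #else
#     1
#     #endif

# Run the probe through the preprocessor with the macros from test.h
MYVAL_IS_ONE := $(shell cpp -P -imacros test.h check.c | grep -v '^$$')

ifeq ($(strip $(MYVAL_IS_ONE)),0)
    $(info MYVAL is defined to 1 in test.h)
else
    $(info MYVAL is not 1 in test.h)
endif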

include processed preprocessor directives into `g++ -E` output

I'm running into a preprocessing mishap while compiling a third-party library with g++.
I can see in -E output that a certain header wrapped with #ifndef SYMBOL is being bypassed. Apparently, that symbol has been defined somewhere else.
But I cannot see where because processed directives are not present in the -E output.
Is there a way to include them (as comments, probably)?
No, there is no standard way to get the processed directives back as comments.
However, you could use g++ -C -E and rely on the line markers (lines starting with #) and on the comments, which are then copied into the preprocessed output.
You might also use the -H option to list the included files.
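For instance (a sketch; the file name is a placeholder):

# keep comments and line markers in the preprocessed output,
# and print the include hierarchy on stderr
g++ -C -E -H some_file.cpp > some_file.i 2> includes.txt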
The closest thing I found is the -d<chars> family of options:
-dM dumps all the macros that are defined
-dD shows where they are defined (dumps #define directives)
-dU shows where they are used (in place of #if(n)def, it outputs #define or #undef depending on whether the macro was defined)
Adding I to any of these also dumps #include directives.
The downside is that only one of the three can be used at a time, and -dM replaces the normal preprocessed output entirely (while -dD and -dU add to it).
Another, less understandable downside is that -dD and -dU do not include the predefined macros.
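To track down where SYMBOL gets defined, something along these lines might work (a sketch; the file name is a placeholder):

# dump the preprocessed output together with all #define directives
g++ -E -dD some_file.cpp > some_file.i
# find the definition, then look upwards for the nearest '# <line> "<file>"' marker
grep -n '#define SYMBOL' some_file.i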

Dynamically generated dependencies

I am trying to generate a file that depends on a set of files that can change between make invocations.
To make this clearer, here is the code:
cmake_minimum_required(VERSION 2.8)
project(demo-one C)

add_custom_command(
    OUTPUT
        "${CMAKE_BINARY_DIR}/generated.c"
    COMMAND
        generate -o "${CMAKE_BINARY_DIR}/generated.c"
    DEPENDS
        "$(shell generate-dependencies-list)"
    COMMENT
        "Generating generated.c"
)
add_executable(main main.c "${CMAKE_BINARY_DIR}/generated.c")
So, I want to generate the file generated.c with the generate command, and this file needs to be regenerated whenever the files specified by the generate-dependencies-list command change. As you may notice, generate-dependencies-list can produce a different set of files from one make invocation to the next, so it is not feasible to run it at configure time and pass its result to add_custom_command.
Actually, the above code somewhat works, but it looks like a hack that will only work with the Makefile backend, and the resulting make rule is not what I expect; after all, it is a hack:
generated.c: ../$(shell\ generate-dependencies-list)
Basically, I want this rule, or something that achieves the same result:
generated.c: $(shell generate-dependencies-list)
Has CMake any feature to achieve this?
when the files specified by generate-dependencies-list command changes
If the output of the generate-dependencies-list command depends only on the script itself and on the script's parameters, then you can just add the script to the DEPENDS sub-option:
add_custom_command(
    OUTPUT
        "${CMAKE_BINARY_DIR}/generated.c"
    COMMAND
        "${CMAKE_CURRENT_LIST_DIR}/generate-dependencies-list"
    COMMAND
        generate -o "${CMAKE_BINARY_DIR}/generated.c"
    DEPENDS
        "${CMAKE_CURRENT_LIST_DIR}/generate-dependencies-list"
    COMMENT
        "Generating generated.c"
)