How do I set a color for a function in a callgraph? - documentation

A doxygen callgraph usually colors a single function gray and all other functions in the call tree white. Unfortunately, I need to color several functions in special colors for my use case. For example, the following main.c
/**
 * @file main.c
 */

/// @brief Foos around
void foo() {}

/// @brief Bars around
void bar() { foo(); }

/// @brief Quux around
void quux() {}

/// @brief System entry point
int main() {
    foo();
    bar();
    quux();
}
together with the following Doxygen configuration
EXTRACT_ALL = YES
CALL_GRAPH = YES
generates a call tree in which main is colored gray and all callees are white.
While this is fine in general, some internal rules require bar to always be colored orange, as it is an "unsafe" function. Similarly, I have to color foo teal to signal that it is safe. What I envision is a graph in which those two nodes carry exactly those colors.
Is it possible to set the color in Doxygen's generated graphs via Doxygen commands? E.g. some magic command like the non-existing @dotnodecolor green? Or do I need to post-process all graphs by hand?

In doxygen there is no way to set the node color, so unfortunately you will have to post-process the relevant graphs yourself.
There might be a way around this by redefining the dot command (it is a bit of a hack):
1. read the doxygen dot file
2. manipulate the required fields
3. write the file back
4. run the real dot command
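A minimal sketch of such a wrapper, assuming a Unix-like setup where the real dot lives in /usr/bin and the .dot file is passed as the first argument (as in the command log below); the fillcolor attribute and label matching are assumptions, so inspect your generated .dot files and adjust:
#!/bin/sh
# dot wrapper: recolor selected nodes, then run the real dot.
dotfile="$1"
shift
# bar is "unsafe" -> orange; foo is "safe" -> teal
sed -i '/label="bar"/ s/fillcolor="[^"]*"/fillcolor="orange"/' "$dotfile"
sed -i '/label="foo"/ s/fillcolor="[^"]*"/fillcolor="teal"/' "$dotfile"
# run the real dot with the modified file and the original arguments
exec /usr/bin/dot "$dotfile" "$@"
Name the script dot and make doxygen pick it up instead of the real one (see DOT_PATH below).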
By means of doxygen -d extcmd you can see the arguments used in the call to e.g. dot.
For your case the output of the latter would be (with DOT_IMAGE_FORMAT = png):
Executing external command `dot ".../html/main_8c_a49a4b11e50430aa0a78de989ea99e082_cgraph.dot" -Tpng -o ".../html/main_8c_a49a4b11e50430aa0a78de989ea99e082_cgraph.png"`
Executing external command `dot ".../html/graph_legend.dot" -Tpng -o "D:/speeltuin/_stack/quest_color_dot/html/graph_legend.png"`
Executing external command `dot ".../html/main_8c_ae66f6b31b5ad750f1fe042a706a4e3d4_cgraph.dot" -Tpng -o ".../html/main_8c_ae66f6b31b5ad750f1fe042a706a4e3d4_cgraph.png"`
Executing external command `dot ".../html/main_8c_ae66f6b31b5ad750f1fe042a706a4e3d4_cgraph.dot" -Tcmapx -o ".../html/main_8c_ae66f6b31b5ad750f1fe042a706a4e3d4_cgraph.map"`
Executing external command `dot ".../html/main_8c_a49a4b11e50430aa0a78de989ea99e082_cgraph.dot" -Tcmapx -o ".../html/main_8c_a49a4b11e50430aa0a78de989ea99e082_cgraph.map"`
and in case DOT_IMAGE_FORMAT = svg:
Executing external command `dot ".../html/main_8c_a49a4b11e50430aa0a78de989ea99e082_cgraph.dot" -Tsvg -o ".../html/main_8c_a49a4b11e50430aa0a78de989ea99e082_cgraph.svg"`
Executing external command `dot ".../html/graph_legend.dot" -Tsvg -o ".../html/graph_legend.svg"`
Executing external command `dot ".../html/main_8c_ae66f6b31b5ad750f1fe042a706a4e3d4_cgraph.dot" -Tsvg -o ".../html/main_8c_ae66f6b31b5ad750f1fe042a706a4e3d4_cgraph.svg"`
Executing external command `dot ".../html/main_8c_a49a4b11e50430aa0a78de989ea99e082_cgraph.dot" -Tcmapx -o ".../html/main_8c_a49a4b11e50430aa0a78de989ea99e082_cgraph.map"`
Executing external command `dot ".../html/main_8c_ae66f6b31b5ad750f1fe042a706a4e3d4_cgraph.dot" -Tcmapx -o ".../html/main_8c_ae66f6b31b5ad750f1fe042a706a4e3d4_cgraph.map"`


emacs verilog-mode local variables not parsed

I'm trying to set some Verilog-mode local variables in the SystemVerilog file itself such as:
// Local Variables:
// verilog-library-flags:("-y ../../../ip_lib/")
// verilog-typedef-regexp: ".*_t$"
// verilog-auto-reg-input-assigned-ignore-regexp: ".*"
// End:
And then I call emacs in command line to generate the code:
emacs --batch ./test.sv -f verilog-batch-auto
But that tells me it cannot find the module that is supposed to be in ../../../ip_lib/
But then if I use:
emacs -q --eval='(progn (setq-default verilog-library-flags "-y ../../../ip_lib") (setq-default verilog-typedef-regexp ".*_t$"))' --batch ./test.sv -f verilog-batch-auto
it works. What is the issue?
I don't use verilog, but glancing at your examples I can see that
(setq-default verilog-library-flags "-y ../../../ip_lib")
and
// verilog-library-flags:("-y ../../../ip_lib/")
are setting different types. The former is a string value, while the latter is a list value (containing a single item, being a string).
So that's presumably the issue.
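If verilog-mode does expect a string here, a minimal fix (untested, simply mirroring your working --eval form) would be to write the file-local value as a string, too:
// Local Variables:
// verilog-library-flags: "-y ../../../ip_lib/"
// verilog-typedef-regexp: ".*_t$"
// End: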

cmake, pass result of external program as preprocessor definitions

I'm new to cmake, so correct me if I've messed things up and this should be solved using something other than cmake.
I have a main_program that requires multiple other subprograms, in the form of bindata, to be specified at the build phase. Right now I build it by running
cmake -DBINDATA1="\xde\xad..." -DBINDATA2="\xbe\xef" -DBINDATA3="..."
and in code I use them as:
// main_program.cpp
int main() {
#ifdef BINDATA1
    perform_action1(BINDATA1);
#endif
#ifdef BINDATA2
    perform_action2(BINDATA2);
#endif
    [...]
This is a rather unclean method, as any time I change one of the subprograms I have to generate bindata from it and pass it to the cmake command.
What I would like to do is have a project structure:
/
-> main_program
-> subprograms
   -> subprogram1
   -> subprogram2
   -> subprogram3
and when I run cmake, I would like it to:
1. compile each of the subprograms
2. generate shellcode from each of them, by running the generate_shellcode program on them
3. build main_program, passing the shellcodes from step 2
Then let's do that. Let's first write a short script to generate a header:
#!/bin/sh
# ./custom_script.sh
# TODO: Work out the proper quoting and add `"` where necessary, i.e. details.
# Prefer actual variables like `static const char *shellcode[3]`
# instead of raw macro defines.
cat > "$1" <<EOF
#define SHELLCODE1 $(cat "$2")
#define SHELLCODE2 $(cat "$3")
#define SHELLCODE3 $(cat "$4")
EOF
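It would be invoked like so (file names are placeholders matching the snippets below):
sh custom_script.sh shellcodes.h shellcode1.txt shellcode2.txt shellcode3.txt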
To be portable, you could write this script in cmake instead. The script will be run at build time to generate the header needed for compilation. Then "model the dependencies", i.e. find out what depends on what exactly, and write it in cmake:
add_executable(subprogram1 sources.c...)
add_executable(subprogram2 sources.c...)
add_executable(subprogram3 sources.c...)
foreach(i IN ITEMS 1 2 3)
  # add_custom_command (not add_custom_target) is needed here,
  # because only add_custom_command can declare an OUTPUT file.
  add_custom_command(
    COMMENT "Generate shellcode${i}.txt with the content of the shellcode"
    # TODO: the redirection in COMMAND should be removed, or the command
    # should be wrapped in `sh -c ...`.
    COMMAND $<TARGET_FILE:subprogram${i}> | generate_shellcode > ${CMAKE_CURRENT_BINARY_DIR}/shellcode${i}.txt
    OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/shellcode${i}.txt
    DEPENDS $<TARGET_FILE:subprogram${i}> generate_shellcode
  )
endforeach()
add_custom_command(
  COMMENT "Generate shellcodes.h from shellcode1.txt, shellcode2.txt and shellcode3.txt"
  COMMAND sh ${CMAKE_CURRENT_SOURCE_DIR}/custom_script.sh
    ${CMAKE_CURRENT_BINARY_DIR}/shellcodes.h
    ${CMAKE_CURRENT_BINARY_DIR}/shellcode1.txt
    ${CMAKE_CURRENT_BINARY_DIR}/shellcode2.txt
    ${CMAKE_CURRENT_BINARY_DIR}/shellcode3.txt
  OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/shellcodes.h
  DEPENDS
    ${CMAKE_CURRENT_BINARY_DIR}/shellcode1.txt
    ${CMAKE_CURRENT_BINARY_DIR}/shellcode2.txt
    ${CMAKE_CURRENT_BINARY_DIR}/shellcode3.txt
)
# Then compile the final executable; listing the generated header as a
# source file makes the target depend on it:
add_executable(main main.c ${CMAKE_CURRENT_BINARY_DIR}/shellcodes.h)
# Don't forget to add the include directory!
target_include_directories(main PUBLIC ${CMAKE_CURRENT_BINARY_DIR})
# Alternatively, you may add the dependency to a single source file instead
# of the target, e.g. only to a single shellcodeswrapper.c file.
# This should help build parallelization.
set_source_files_properties(main.c PROPERTIES OBJECT_DEPENDS ${CMAKE_CURRENT_BINARY_DIR}/shellcodes.h)
# Or you may add a target for the shellcodes header file and depend on it:
add_custom_target(shellcodes DEPENDS ${CMAKE_CURRENT_BINARY_DIR}/shellcodes.h)
add_executable(main main.c)
target_include_directories(main PUBLIC ${CMAKE_CURRENT_BINARY_DIR})
add_dependencies(main shellcodes)
Then your main file:
#include <shellcodes.h> // the compiler will find it in BINARY_DIR
int main() {
    perform_action1(SHELLCODE1);
    perform_action2(SHELLCODE2);
}
So that all your source files are not recompiled each time, I suggest writing a wrapper:
// shellcodewrapper.c
#include <shellcodes.h>

// keep the data in a single translation unit instead of duplicating it
static const char shellcode1[] = SHELLCODE1;

// only this file will be recompiled when SHELLCODE1 changes
const char *get_shellcode1(void) {
    return shellcode1;
}
// ... and likewise get_shellcode2(), get_shellcode3()
// shellcodewrapper.h
const char *get_shellcode1(void);
const char *get_shellcode2(void);

// main.c
#include <shellcodewrapper.h>
int main() {
    perform_action1(get_shellcode1());
    perform_action2(get_shellcode2());
}
That way, when you change the "SHELLCODE" generators, only shellcodewrapper.c will be recompiled, resulting in super fast compilation times.
Note how the dependency is transferred and how it works: I used files inside BINARY_DIR to pass the result from one command to another, and these files track what has changed and carry the dependency down the chain. Track dependencies with DEPENDS and OUTPUT in add_custom_command, and cmake will compile everything in the proper order.

CMake: How can I compile defines and flags as string constants into my C(++) program?

I'd like my C or C++ program that is built via CMake to be able to print (or otherwise make use of) the macro definitions and (other) C/C++ flags it was compiled with. So I want CMake to generate/configure a header or source file that defines respective string constants and that is then built as part of/into my program.
CMake features several commands (like file() or execute_process()) that are executed when (or before) the build system is generated and would thus allow me to write such a source file, but I'm having trouble getting the effective macro definitions and flags used for my target. E.g. there seem to be COMPILE_DEFINITIONS for the directory, for the target, and for the configuration. Is there a way to get the macro definitions/C(++) flags that are effectively used for building my target? And how do I best write them into a source file?
I've noticed that when using the Makefiles generator, a file "${CMAKE_CURRENT_BINARY_DIR}/CMakeFiles/MyTarget.dir/flags.make" is created, which seems to contain pretty much what I'm looking for. So if there's no other way, I can probably make use of that file, but obviously that won't work for other generators, and it comes with its own challenges (the file is generated after execute_process()).
The approach I finally went with sets the CXX_COMPILER_LAUNCHER property to use a compiler wrapper script that injects the actual compiler command line into a source file. Since I have multiple libraries/executables to which I want to add the respective information, I use a CMake function that adds a source file containing the info to the target.
function(create_module_build_info _target _module _module_include_dir)
  # generate BuildInfo.h and BuildInfo.cpp
  set(BUILD_MODULE ${_module})
  set(BUILD_MODULE_INCLUDE_DIR ${_module_include_dir})
  configure_file(${CMAKE_SOURCE_DIR}/BuildInfo.h.in
    ${CMAKE_BINARY_DIR}/include/${_module_include_dir}/BuildInfo.h
    @ONLY)
  configure_file(${CMAKE_SOURCE_DIR}/BuildInfo.cpp.in
    ${CMAKE_CURRENT_BINARY_DIR}/BuildInfo.cpp
    @ONLY)
  # Set our wrapper script as a compiler launcher for the target. For
  # BuildInfo.cpp we want to inject the build info.
  get_property(_launcher TARGET ${_target} PROPERTY CXX_COMPILER_LAUNCHER)
  set_property(TARGET ${_target} PROPERTY CXX_COMPILER_LAUNCHER
    ${CMAKE_SOURCE_DIR}/build_info_compiler_wrapper.sh ${_launcher})
  get_property(_compile_flags SOURCE BuildInfo.cpp PROPERTY COMPILE_FLAGS)
  set_property(SOURCE BuildInfo.cpp PROPERTY COMPILE_FLAGS
    "${_compile_flags} -D_BUILD_INFO=${CMAKE_CURRENT_BINARY_DIR}/BuildInfo_generated.cpp,${_module}")
  # add BuildInfo.cpp to the target
  target_sources(${_target} PRIVATE BuildInfo.cpp)
endfunction()
The function can simply be called after defining the target. Its parameters are the target, a name that is used as the prefix of the constant to be generated, and a name that becomes part of the path of the generated header file. The compiler flag -D_BUILD_INFO=... is only added to the generated source file, and the wrapper script uses it as an indicator that the constant definition should be added to that source file. All other compiler lines are just invoked as-is by the script.
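For example (the target and module names here are hypothetical):
add_executable(my_app main.cpp)
# make the generated header visible as "my_app/BuildInfo.h"
target_include_directories(my_app PRIVATE ${CMAKE_BINARY_DIR}/include)
create_module_build_info(my_app MY_APP my_app)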
The template source file "BuildInfo.cpp.in":
#include "#BUILD_MODULE_INCLUDE_DIR#/BuildInfo.h"
The template header file "BuildInfo.h.in":
#pragma once
#include <string>
extern const std::string @BUILD_MODULE@_COMPILER_COMMAND_LINE;
The compiler wrapper script "build_info_compiler_wrapper.sh":
#!/usr/bin/env bash
set -e

function createBuildInfoTempFile()
{
    local source="$1"
    local target="$2"
    local prefix="$3"
    local commandLine="$4"
    cp "$source" "$target"
    cat >> "$target" <<EOF
const std::string ${prefix}_COMPILER_COMMAND_LINE = "$commandLine";
EOF
}

# Process the script arguments. We copy them to the array variable args. If we
# find an argument "-D_BUILD_INFO=*", we remove it and will inject the build
# info variable definition into (a copy of) the input file.
generateBuildInfo=false
buildInfoTempFile=
buildInfoVariablePrefix=
args=()
while [ $# -ge 1 ]; do
    case "$1" in
    -D_BUILD_INFO=*)
        if [[ ! "$1" =~ -D_BUILD_INFO=([^,]+),(.+) ]]; then
            echo "error: failed to get arguments for build info generation" >&2
            exit 1
        fi
        generateBuildInfo=true
        buildInfoTempFile="${BASH_REMATCH[1]}"
        buildInfoVariablePrefix="${BASH_REMATCH[2]}"
        shift
        continue
        ;;
    esac
    args+=("$1")
    shift
done

if $generateBuildInfo; then
    # We expect the last argument to be the source file. Check!
    case "${args[-1]}" in
    *.c|*.cxx|*.cpp|*.cc)
        createBuildInfoTempFile "${args[-1]}" "$buildInfoTempFile" \
            "$buildInfoVariablePrefix" "${args[*]}"
        args[-1]="$buildInfoTempFile"
        ;;
    *)
        echo "error: Failed to find source file in compiler arguments for build info generation feature." >&2
        exit 1
        ;;
    esac
fi

"${args[@]}"
Obviously the script can be made smarter. E.g. instead of assuming the input source file is the last argument, it could determine its actual index. It could also process the command line to separate preprocessor definitions, include paths, and other flags.
Note that a "-D_BUILD_INFO=..." argument is used instead of some parameter that the compiler wouldn't know (e.g. "--generate-build-info"), so that IDEs won't run into issues when passing the arguments directly to the compiler for whatever purpose.

Wine spec files

I have a Windows DLL called morag.dll containing functions foo and bar. I also have a Linux SO called morag.so containing the Linux implementations of foo and bar (same parameters on each platform). I have a Windows application that loads morag.dll, and I want to run it under wine. The application itself runs fine; however, I need to create a mapping so that foo and bar, which my application expects to find in morag.dll, instead resolve to foo and bar in morag.so.
To do this I know I need to create a morag.dll.spec file and winebuild it into morag.dll.so.
Following the instructions here, I created a wrapper in morag.c containing functions Proxyfoo and Proxybar, which do nothing more than call the real functions foo and bar. Then I created morag.dll.spec thus:
1 stdcall foo (long ptr) Proxyfoo
2 stdcall bar (ptr ptr) Proxybar
I compiled my C part, ran winebuild on the spec file, and then used winegcc to link them into morag.dll.so.
Then I read this page, which suggested you maybe didn't need the proxy functions, so I tried without the C part altogether and made a spec file thus:
1 stdcall foo (long ptr)
2 stdcall bar (ptr ptr)
And as above, did the winebuild step and the winegcc link step.
In both cases these were the options I used.
winebuild --dll -m32 -E ./morag.dll.spec -o morag.dll.o
ldopts= -m32 -fPIC -shared -L/usr/lib/wine -L/opt/morag/lib -lmorag
winegcc $(ldopts) -z muldefs -o morag.dll.so [morag.o] morag.dll.o
N.B. [..] denotes I only used this in the case where I was also building the c part.
In both cases, when my application running under wine tries to load the entry point in the DLL using GetProcAddress, it fails.
I ran wine with WINEDEBUG=+module,+relay and saw the attempt and failure recorded as follows:
0025:Ret KERNEL32.LoadLibraryExA() retval=7dbc0000 ret=00447b84
0025:Call KERNEL32.GetProcAddress(7dbc0000,00b2d060 "foo") ret=00447c8a
0025:Ret KERNEL32.GetProcAddress() retval=00000000 ret=00447c8a
It seems it has found and loaded my morag.dll.so since LoadLibraryExA has returned the handle to it, but when it tries to find function foo within that HMODULE handle it fails.
If I issue:
nm -D morag.dll.so
I see foo and bar shown as U in both cases. In the case where there are proxy functions as well, the proxy functions are shown as T.
I assume that this is because I have not built the morag.dll.so file correctly: either I used the wrong options, or my spec file is not correctly formed. I am not sure which of the two schemes described above I should be using.
All help most appreciated.
I ran into the same problem today.
What was missing in my case was the proper export rule for e.g. foo and bar in the built-in DLL. Conveniently, besides the --dll object, the winebuild tool can also create a .def file for us, e.g.:
morag.def: morag.spec
	$(WINEBUILD) --def -E $< -o $@
The resulting .def file must be linked into morag.dll.so along with other objects. This does the job.
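Adapted to the commands from the question, the build would then look roughly like this (a sketch; the exact flags and paths depend on your setup):
winebuild --def -m32 -E morag.dll.spec -o morag.def
winebuild --dll -m32 -E morag.dll.spec -o morag.dll.o
winegcc -m32 -fPIC -shared -z muldefs -o morag.dll.so morag.dll.o morag.def -L/usr/lib/wine -L/opt/morag/lib -lmorag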

Is it possible to merge coverage data from two executables with gcov/gcovr?

On one project, I'm running the test cases on three different executables, compiled with different options. Depending on the options, some code paths are taken or not. Right now, I'm only using the coverage data from one executable.
I'm using gcovr to generate a XML that is then parsed by Sonar:
gcovr -x -b -r . --object-directory=debug/test > coverage_report.xml
I have three sets of gcda and gcno files, but I don't know how to generate a global report from them.
Is there any way to do that?
Assuming that by "compiled with different options" you mean that you compile such that you obtain different outputs after preprocessing, with the help of lcov (as mentioned by k0n3ru) I was able to do it. Here's the sample code in file sut.c:
#include "sut.h"
#include <limits.h>
int foo(int a) {
#if defined(ADD)
a += 42;
#endif
#if defined(SUB)
a -= 42;
#endif
return a;
}
with sut.h only providing the declaration of foo, and a simple main in test.c, which calls foo and prints the results. Then, with this sequence of commands I was able to create a total.info file with 100% coverage for sut.c:
> g++ --coverage -DADD test.c sut.c -o add.out
> ./add.out
> lcov -c -d . -o add.info # save data from .gdda/.gcno into add.info
> g++ --coverage -DSUB test.c sut.c -o sub.out
> ./sub.out
> lcov -c -d . -o sub.info # save again, this time into sub.info
> lcov -a add.info -a sub.info -o total.info # combine them into total.info
> genhtml total.info
Which then shows full (100%) coverage for sut.c.
EDIT (thanks to Gluttton for reminding me to add this part): Going from the total.info file in lcov format to Cobertura XML output should then be possible with the help of the "lcov to cobertura XML converter" provided here (although I have not tried it): https://github.com/eriwen/lcov-to-cobertura-xml
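Its command line is roughly as follows (untried, as said; based on that project's documentation, with the total.info from above as input):
python lcov_cobertura.py total.info --output coverage_report.xml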
The fact that merging coverage information is possible, however, certainly does not mean that it is a good idea: coverage, IMO, has only limited informative value regarding the quality of a test suite, and merging coverage results from different preprocessor outputs decreases that value even further.
This is because the chance for developers to learn about scenarios they have not considered is reduced: with conditional compilation, the control structure and data flow of the code can vary tremendously between preprocessor outputs. Coverage information that results from 'overlaying' test runs for different preprocessor outputs can therefore make a meaningful interpretation of the results impossible.