Declaring GoogleTest tests in a separate library

I have a lot of auto-generated tests using the GoogleTest framework.
Currently each test lives in a .cpp file that is included in a larger "Tests.cpp" file, which is in turn included in the main file.
When I try to compile all of them, my computer freezes.
I assume this is because everything is being compiled into a single output file.
Is there a way to write each test fixture in a "normal" way, producing an output file for each test fixture/case and then linking them?
Thanks

The "normal" way to use GoogleTest is to put the tests in a separate project from the project you wish to test, i.e. if you wish to test your project Foo you should place you tests in (e.g.) the FooTest project.
In the FooTest project you main should look something like this:
#include "gtest/gtest.h"
int main(int argc, char** argv)
{
::testing::InitGoogleTest(&argc, argv);
return RUN_ALL_TESTS();
}
And an individual test file should look like this:
#include "gtest/gtest.h"
#include "IntComparer.h"
namespace
{
class IntComparerTest : public ::testing::Test
{
protected:
IntComparerTest () { ... };
virtual ~IntComparerTest () { ... };
};
TEST_F(IntComparerTest, biggerThanZero)
{
EXPECT_TRUE(IntComparer::inputBiggerThanZero(1));
}
TEST_F(IntComparerTest, biggerThanZero_false)
{
EXPECT_FALSE(IntComparer::inputBiggerThanZero(-1));
}
}
Note that the inclusion of gtest.h and the TEST_F macro cause the test cases to be registered automatically with the test framework (and thus found and run when the test executable is run), provided IntComparer.cpp is compiled and linked into the test project - there is NO need to include IntComparer.cpp anywhere.
That said, you have not specified your build environment, nor provided any sample code showing where you are stuck, so I cannot give you any advice beyond this.
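For illustration only, here is a minimal sketch of such a test project with CMake (just one possible build environment, since yours is unspecified; the Foo/FooTest names are the placeholders from above, and a gtest library target is assumed to already exist):

add_executable(FooTest main.cpp IntComparerTest.cpp)  # one .cpp per fixture, each compiled separately
target_link_libraries(FooTest gtest Foo)              # link against GoogleTest and the code under test

This gives you one object file per fixture and a single linked test executable, with no file ever #including another .cpp.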

It is very unclear what you are doing. Normally in C++ you should not include .cpp files. We would need the output you get from the compiler.
The usual way is to have one translation unit (one .cpp and one header file) per test fixture and its associated test cases.
GoogleTest is nothing more than a C++ library with heavy and complex macros. The usual rules of C++ programming apply.

Code sharing between multiple independently compiled binaries/hex files

I'm looking for documentation/information on how to share information/code between multiple binaries compiled for Cortex-M0/M4/M7 architectures. The two binaries will be on the same chip and the same architecture. They are flashed at different locations; one binary sets the main stack pointer and resets the program counter so that it "jumps" to the other binary. I want to share code between these two binaries.
I've done a simple copy of an array of function pointers into a section, defined in the linker script, in RAM. The other binary then reads that RAM, casts it to an array, and uses an index to call functions in the first binary. This works as a proof of concept, but I think what I'm looking for is a bit more complex, as I want some way of describing compatibility between the two binaries. I essentially want the functionality of shared libraries, but I'm unsure whether I need position-independent code.
As an example of how the current copy process is done, it is basically:
Source binary:

void copy_func()
{
    /* copy the function pointer table into the shared RAM section */
    memcpy(address_custom_ram_section, array_of_function_pointers, fixed_size);
}

Binary which is jumped to from the source binary:

array_fp_type get_funcs()
{
    /* read the table back out of the shared RAM section */
    memcpy(array_of_fp, address_custom_ram_section, fixed_size);
    return array_of_fp;
}
Then I can use array_of_fp to call functions residing in the source binary from the jump binary.
So what I'm looking for is resources or input from someone who has implemented a similar system. For example, I would like not to need a custom RAM section to copy the function pointers into.
I would be fine with the compilation step of the source binary outputting something which can be included in the compilation step of the jump binary. However, it needs to be reproducible, and recompiling the source binary shouldn't break compatibility with the jump binary (even if it included a different file from what is output now) as long as you don't change the interface.
To clarify, the source binary shouldn't require any specific knowledge about the jump binary. The code should not reside in both binaries, as that would defeat the purpose of this mechanism. The overall goal of this mechanism is to save space when creating multi-binary applications on Cortex-M processors.
Any ideas or links to resources are welcome. If you have any more questions, feel free to comment on the question and I'll try to answer.
It's very hard for me to picture what you want to do, but if you're interested in having an application link against your bootloader/ROM, then see Loading symbol file while linking for a hint on what you could do.
Build your "source"(?) image, scrape its mapfile and make a symbol file, then use that when you link your "jump"(?) image.
This does mean you need to link your "jump" image against a specific version of your "source" image.
If you need them to be semi-version-independent (i.e. you define a set of functions that get exported, but you can rebuild on either side), then you need to export function pointers at known locations in your "source" image and link against those function pointers in your "jump" image. You can simplify the bookkeeping by making a structure of function pointers and accessing the functions through that on either side.
For example:
shared_functions.h:
struct FunctionPointerTable
{
    void (*function1)(int);
    void (*function2)(char);
};

extern struct FunctionPointerTable sharedFunctions;
Source file in "source" image:
#include <stdio.h>
#include "shared_functions.h"

void function2Implementation(char b); /* forward declaration for the direct call below */

void function1Implementation(int a)
{
    printf("You sent me an integer: %d\r\n", a);
    function2Implementation((char)(a % 256));   /* direct call: always the "source" version */
    sharedFunctions.function2((char)(a % 256)); /* call through the table: overridable */
}

void function2Implementation(char b)
{
    printf("You sent me a char: %c\r\n", b);
}

struct FunctionPointerTable sharedFunctions =
{
    function1Implementation,
    function2Implementation,
};
Source file in "jump" image:
#include "shared_functions.h"
sharedFunctions.function1(1024);
sharedFunctions.function2(100);
When you compile/link the "source" image, take its mapfile, extract the location of sharedFunctions, and create a symbol file that is linked into the "jump" image.
Note: the printfs (or anything directly called by the shared functions) come from the "source" image (and not the "jump" image).
If you need them to come from the "jump" image (or be overridable), then you need to access them through the same function pointer table, and the "jump" image needs to fix the function pointer table up with its version of the relevant function. I updated function1() to show this: the direct call to function2 will always be the "source" version, while the call through the table will reach the "source" version unless the "jump" image updates the table to point at its own implementation.
You CAN get away from the structure, but then you need to export the function pointers one by one (not a big problem), and you want to keep them in order and at a fixed location, which means explicitly placing them in the linker descriptor file, etc. I showed the structure method to distill it down to the easiest example.
As you can see, things get pretty hairy, and there is some penalty (calling through a function pointer is slower, because you need to load the address to jump to).
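As a rough sketch of the symbol-file step with GNU tools (the address below is hypothetical; it would be scraped from the "source" image's map file), the "jump" image's link could resolve the table like this:

arm-none-eabi-gcc ... -Wl,--defsym,sharedFunctions=0x08003F00

or via a generated one-line linker-script fragment added to the link:

/* shared_symbols.ld - generated from the "source" map file */
sharedFunctions = 0x08003F00;

Either way, the "jump" image resolves sharedFunctions to a fixed address inside the "source" image without pulling in the code itself.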
As explained in the comments, we could imagine an application and a bootloader relying on the same dynamic library: both depend on the library, and the application can be changed without impact on the library or the bootloader.
I did not find an easy way to build a shared library with arm-none-eabi-gcc. However, this document gives some alternatives to shared libraries. In your case, I would recommend the jump table solution:
Write a library with the functions that need to be used in both the bootloader and the application.
"library" code
typedef void (*genericFunctionPointer)(void);

void lib_f1(void);
uint8_t lib_f2(uint8_t param);

// use the linker script to place MySection at a known address
// This could be a structure like in Russ Schultz's solution, but a struct may or may not be laid out
// identically in the lib and the boot. A struct would be much easier, though, and would avoid many
// function pointer casts.
const genericFunctionPointer FpointerArray[] __attribute__ ((section ("MySection"))) =
{
    (genericFunctionPointer)lib_f1,
    (genericFunctionPointer)lib_f2,
};

void lib_f1(void)
{
    // some code
}

uint8_t lib_f2(uint8_t param)
{
    // some code
    return param; /* placeholder so the function returns a value */
}
application and/or bootloader code

#include <stdint.h>

typedef void (*genericFunctionPointer)(void);

enum
{
    lib_f1,
    lib_f2,
    NB_F,
};

// Use the linker script to place MySection at the same address the library was linked at.
// In the linker script, also mark this section as NOLOAD, because it is initialized by the library
// and not by our code.
// volatile is needed here because you read flash memory written by the other image; without it the
// compiler may assume these never-assigned pointers are NULL.
volatile const genericFunctionPointer FpointerArray[NB_F] __attribute__ ((section ("MySection")));

// cast types matching the real signatures of lib_f1 and lib_f2
typedef void (*correctCastF1)(void);
typedef uint8_t (*correctCastF2)(uint8_t);

int main(void)
{
    ((correctCastF1)FpointerArray[lib_f1])();
    uint8_t a = ((correctCastF2)FpointerArray[lib_f2])(10);
    (void)a;
    return 0;
}
You can look into using linker sections. If you have your bootloader source code in folder bootloader, you can use
SECTIONS
{
    .bootloader :
    {
        build_output/bootloader/*.o(.text)
    } > flash_region1

    .binary1 :
    {
        build_output/binary1/*.o(.text)
    } > flash_region2

    .binary2 :
    {
        build_output/binary2/*.o(.text)
    } > flash_region3
}

Force g++ to generate code for unused functions

By default, g++ seems to omit code for unused in-class defined methods. Example from my previous question:
struct Foo {
    void bar() {}
    void baz() {}
};

int main() {
    Foo foo;
    foo.bar();
}
When compiling this with g++ -g -O0 -c main.cpp, the resulting object file only contains references to bar and not to baz. Adding -fno-default-inline to the compiler flags does not help either. Any ideas how I can force g++ to generate code for baz as well?
Rationale
The test coverage tool gcov reports unused methods as non-executable if they are omitted from the final executable. However, to get meaningful reports I want them reported as executable-but-not-executed, and I need a way to achieve this without altering the original source code.
A portable way to do that is to add some "reference" (in the ordinary sense of the word, not only the C++ one) to these routines.
This could be something as simple as
typedef void (Foo::*funptr_t) (void);
extern "C" const funptr_t tabfun[] = { &Foo::bar, &Foo::baz };
(I'm declaring the array tabfun as extern "C" to be sure the array is emitted even if not used.)
You might try the -fno-inline argument to GCC. You could also customize GCC (e.g. with MELT) to have such an array added automatically (without touching the source code), but this requires some work.
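As a quick way to verify the effect (a sketch assuming a GNU toolchain and the files above):

g++ -g -O0 -c main.cpp
nm -C main.o | grep 'Foo::'

Without the reference array only Foo::bar() appears; with tabfun present, Foo::baz() should be emitted as well.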

Implement Lua scripting through DLL calls?

Is it possible to write a program that can execute lua scripts just by using the lua52.dll file?
Or do I have to create a new C project and use all these header and source files?
I just want to create a few global variables and functions and make them available in the lua scripts that should be executed.
So in theory:
LoadDll("lua52.dll")
StartLua()
AddFunctionToLua("MyFunction1")
AddFunctionToLua("MyFunction2")
AddVariableToLua("MyVariable1")
...
ExecuteLuaScript("C:\myScript.lua")
CloseLua()
The standard command line interpreter for Lua is an example of just such a program. On Windows, it is a small executable that is linked to lua52.dll. Its source is, of course, part of the Lua distribution.
Despite being located in the same folder as the sources to the Lua DLL, lua.c only references the public API for Lua, and depends only on the four public header files and the DLL itself.
An even simpler example that embeds a Lua interpreter in a C program is the following, derived from the example shown in the PiL book available online:
#include <stdio.h>
#include <string.h>
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>

int main (void) {
    char buff[256];
    int error;
    lua_State *L = luaL_newstate(); /* create state */
    luaL_openlibs(L);               /* open standard libraries */
    while (fgets(buff, sizeof(buff), stdin) != NULL) {
        error = luaL_loadbuffer(L, buff, strlen(buff), "line") ||
                lua_pcall(L, 0, 0, 0);
        if (error) {
            fprintf(stderr, "%s", lua_tostring(L, -1));
            lua_pop(L, 1);          /* pop error message from the stack */
        }
    }
    lua_close(L);
    return 0;
}
In your existing application, you would need to call luaL_newstate() once and store the returned handle. Along with a call to luaL_openlibs(), you would likely want to define one or more Lua modules representing your application's scriptable API. And of course, you need to call lua_close() sometime before exiting, so that Lua has a chance to clean up its objects, in particular any objects that script authors depend on to release resources when the application exits.
With that in place, you generally provide a way to load script fragments provided by your user, using luaL_loadbuffer() or any of several other functions built on top of lua_load(). Loading a script compiles it and leaves an anonymous function on the top of the stack that, when called, will execute all top-level statements in the script.
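As a minimal sketch of those two steps (MyFunction1 and MyVariable1 are the hypothetical names from the question; only standard Lua 5.2 C API calls are used):

/* a C function exposed to Lua: doubles its numeric argument */
static int l_MyFunction1 (lua_State *L) {
    double x = luaL_checknumber(L, 1); /* read the first argument */
    lua_pushnumber(L, 2 * x);          /* push the result */
    return 1;                          /* number of results */
}

/* call once after luaL_openlibs(L) */
static void register_api (lua_State *L) {
    lua_register(L, "MyFunction1", l_MyFunction1); /* global function */
    lua_pushinteger(L, 42);
    lua_setglobal(L, "MyVariable1");               /* global variable */
}

After registering, running a user script is just luaL_dofile(L, "myScript.lua") (checking its return value for errors), and the script can call MyFunction1(10) and read MyVariable1 directly.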
For a lot more discussion of this, see the chapters of Programming in Lua (an older edition is available online) that relate to the C API.
LoadDll("lua52.dll")
StartLua()
AddFunctionToLua("MyFunction1")
AddFunctionToLua("MyFunction2")
AddVariableToLua("MyVariable1")
...
ExecuteLuaScript("C:\myScript.lua")
CloseLua()
What language is the above written in? What application is running it? If this is a Lua script, then "AddFunctionToLua" is simply function name() end. If this is C, then you've already got a C project, so there's no need to "create a new C project". So it's unclear what you're asking.

How to specify library dependency introduced by header file

Suppose in a CMake project, I have a source that is built into a library
// a.cpp
void f() { /* some code*/ }
And I have a header
// b.h
void f();
struct X { void g() { f(); } };
I have another file:
// main.cpp
#include "b.h"
int main() { X x; x.g(); }
The CMakeLists.txt contains:
add_library(A a.cpp)
add_executable(main main.cpp)
target_link_libraries(main A)
Now look at the last line of the CMakeLists.txt: I need to specify A as a dependency of main explicitly. Basically, I need to specify such a dependency for every source file that includes b.h, and the includes can be indirect, going all the way down through a chain of includes. For example, a.cpp calls a class's inline function from c.h, which in turn calls a function in d.h, etc., and finally calls a function from library A. If b.h is included by lots of files, manually finding all such dependencies is not feasible for large projects.
So my question is: is there any way to specify that every source file that directly or indirectly includes a header needs to link against a certain library?
Thanks.
To make one thing clear: your a.cpp gets compiled into a lib "A". That means that any user of A will need to specify target_link_libraries with A. There is no way around it. If you have 10 little applications using A, you will need to specify target_link_libraries ten times.
My answer deals with the second issue of your question and I believe it is the more important one:
How to get rid of a chain of includes?
By including a.h in b.h and using its function in b.h, you are adding an "implicit" dependency. As you noticed, any user of b.h then needs a.h as well. Broadly speaking, there are two approaches.
The good approach:
This has nothing to do with CMake, but is about encapsulation. The users of your library (incl. you yourselves) should not need to worry about its internal implementation. That means: don't include a.h in b.h.
Instead, move the include to a .cpp file. This way, you break the chain. E.g. something like
// b.h
void f();

struct X
{
    void g();
};

// b.cpp
#include "b.h"
#include "a.h" // the dependency is now hidden in the .cpp

void X::g()
{
    f();
}
This way, the use of a.h is "contained" in the cpp file, and anyone using your library need only include b.h and link to b.lib.
The alternative:
Now, there are situations where you have to accept such a "dependency", or where it is a conscious choice, e.g. when you have no control over A, or when you consciously decided to create a library defined in terms of classes/structs internal to A.
In that case, I suggest you write a piece of CMake code which prepares all the necessary include dirs down the chain. E.g. define the variables "YOURLIB_INCLUDES" and "YOURLIB_LIBRARIES" in a "YourLibConfig.cmake" and document that any user of your library should import "YourLibConfig.cmake". This is the approach several CMake-based projects take: e.g. OpenCV installs an OpenCVConfig.cmake file, and VTK installs a VTKConfig.cmake and prepares a UseVTK.cmake file.
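A minimal sketch of what such a config file could contain (the variable and target names are illustrative, not from any real project):

# YourLibConfig.cmake
set(YOURLIB_INCLUDES ${YOURLIB_ROOT}/include ${A_INCLUDE_DIR}) # your headers plus everything b.h drags in
set(YOURLIB_LIBRARIES YourLib A)                               # your lib plus the libs it needs

A consumer then writes:

include(${YOURLIB_ROOT}/YourLibConfig.cmake)
include_directories(${YOURLIB_INCLUDES})
target_link_libraries(main ${YOURLIB_LIBRARIES})

so the transitive dependency on A is recorded once, in one place, instead of at every target that happens to include b.h.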

Avoiding duplicate SWIG boilerplate when using many SWIG-generated modules

When generating an interface module with SWIG, the generated C/C++ file contains a ton of static boilerplate functions. So if one wants to modularize the use of SWIG-generated interfaces by using many separately compiled small interfaces in the same application, there ends up being a lot of bloat due to these duplicate functions.
Using gcc's -ffunction-sections option, and the GNU linker's --icf=safe option (-Wl,--icf=safe to the compiler), one can remove some of the duplication, but by no means all of it (I think it won't coalesce anything that has a relocation in it—which many of these functions do).
My question: I'm wondering if there's a way to remove more of this duplicated boilerplate, ideally one that doesn't rely on GNU-specific compiler/linker options.
In particular, is there a SWIG option/flag/something that says "don't include boilerplate in each output file"? There actually is a SWIG option, -external-runtime, that tells it to generate a "boilerplate-only" output file, but there is no apparent way of suppressing the copy included in each normal output file. [I think this sort of thing should be fairly simple to implement in SWIG, so I'm surprised that it doesn't seem to exist... but I can't seem to find anything documented.]
Here's a small example:
Given the interface file swt-oink.swg for module swt_oink:
%module swt_oink
%{ extern int oinker (const char *x); %}
extern int oinker (const char *x);
... and a similar interface swt-barf.swg for swt_barf:
%module swt_barf
%{ extern int barfer (const char *x); %}
extern int barfer (const char *x);
... and a test main file, swt-main.cc:
extern "C"
{
#include "lua.h"
#include "lualib.h"
#include "lauxlib.h"
extern int luaopen_swt_oink (lua_State *);
extern int luaopen_swt_barf (lua_State *);
}
int main ()
{
lua_State *L = lua_open();
luaopen_swt_oink (L);
luaopen_swt_barf (L);
}
int oinker (const char *) { return 7; }
int barfer (const char *) { return 2; }
and compiling them like:
swig -lua -c++ swt-oink.swg
g++ -c -I/usr/include/lua5.1 swt-oink_wrap.cxx
swig -lua -c++ swt-barf.swg
g++ -c -I/usr/include/lua5.1 swt-barf_wrap.cxx
g++ -c -I/usr/include/lua5.1 swt-main.cc
g++ -o swt swt-main.o swt-oink_wrap.o swt-barf_wrap.o
then the size of each xxx_wrap.o file is about 16KB, of which 95% is boilerplate, and the size of the final executable is roughly the sum of these, about 39K. If one compiles each interface file with -ffunction-sections, and links with -Wl,--icf=safe, the size of the final executable is 34KB, but there's still clearly a lot of duplication (using nm on the executable one can see tons of functions defined multiple times, and looking at their source, it's clear that it would be fine to use a single global definition for most of them).
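For concreteness, the deduplicating variant of that build would look something like the following (note that --icf is implemented by the gold linker, so -fuse-ld=gold may be needed depending on your binutils):

g++ -c -ffunction-sections -I/usr/include/lua5.1 swt-oink_wrap.cxx
g++ -c -ffunction-sections -I/usr/include/lua5.1 swt-barf_wrap.cxx
g++ -c -ffunction-sections -I/usr/include/lua5.1 swt-main.cc
g++ -o swt -Wl,--icf=safe swt-main.o swt-oink_wrap.o swt-barf_wrap.o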
I'm fairly sure SWIG doesn't have an option for doing this. I'm speculating now, but I think the reason might well be concern about visibility of this for modules built with different versions of SWIG. Imagine the following scenario:
Two libraries X and Y both provide an interface to their code using SWIG. They both opt to make the "SWIG glue" stuff visible across different translation units in order to reduce code size. This will all be well and good if both X and Y are using the same version of SWIG. What happens though if X uses SWIG 1.1 and Y uses SWIG 1.3? Both modules work fine on their own, but depending on how the platform treats shared objects and how the language itself loads them (RTLD_GLOBAL?) some potentially very bad things would happen from the combination of the two modules being used in the same VM.
The penalty of the code duplication is pretty low, I suspect - the cost of swapping between the VM and native code is typically quite high, which probably dwarfs the slightly reduced instruction cache hit rate, although it might be interesting to see real benchmarks. On the up side, this is code no user ever needs to worry about, since it is all auto-generated and stays consistent with interfaces generated by the corresponding SWIG version.
I might be a bit late, but here is a workaround:
In SWIG (<= 1.3) there is a -noruntime command-line option.
Since SWIG 2.0, -noruntime has been deprecated, so now one should pass -DSWIG_NOINCLUDE to the C preprocessor - not to SWIG itself.
I am not completely sure that this is correct, but it at least works for me. I am going to clarify this question on the SWIG mailing list.
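Applied to the example above, the workaround would look something like this (a sketch only, given the uncertainty noted; swigluarun.h is the name SWIG conventionally uses for its shared Lua runtime header):

swig -lua -external-runtime swigluarun.h    # emit the shared runtime once
swig -lua -c++ swt-oink.swg
swig -lua -c++ swt-barf.swg
g++ -c -DSWIG_NOINCLUDE -I/usr/include/lua5.1 swt-oink_wrap.cxx
g++ -c -DSWIG_NOINCLUDE -I/usr/include/lua5.1 swt-barf_wrap.cxx

with the runtime code from swigluarun.h compiled into exactly one translation unit of the final program.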