Will Go test code that is only referenced in test files be compiled into the binary?

I am wondering what code will be compiled into the Go binary when I build it with go build ./... . The binary is a CLI program. For this CLI program I have test code and non-test code. I currently have several flavours of test code:
foo_test.go in package foo_test
foo_internal_test.go in package foo
testutil.go in package testutil that provides test utility functions
No test code is referenced from the non-test code; the testutil functions are only imported in the test files.
If the test code is in fact compiled into the binary, how much of a problem is this?
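For concreteness, a minimal sketch of that layout (the module path and the helper function are made up for illustration):

file: foo/foo.go (package foo)
package foo

// Add stands in for the real, non-test code.
func Add(a, b int) int { return a + b }

file: testutil/testutil.go (package testutil)
package testutil

// Fixture is a made-up helper that only _test.go files import.
func Fixture() int { return 40 }

file: foo/foo_test.go (package foo_test)
package foo_test

import (
    "testing"

    "example.com/cli/foo"      // made-up module path
    "example.com/cli/testutil" // made-up module path
)

func TestAdd(t *testing.T) {
    if foo.Add(testutil.Fixture(), 2) != 42 {
        t.Fatal("unexpected sum")
    }
}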

A Go binary only includes code reachable from its main() entry point. For test binaries, main() is the test runner.
As to how much of a problem it would be if the test code were included: none. It would increase the binary size and compilation time somewhat, but otherwise have no impact - code that isn't executed, by definition, does nothing.

I believe that if you have an init() function in an otherwise unreachable file, it will still be linked into the executable.
_test.go files would still be excluded.
This bit us when we had some test helper code that was not in _test.go files. One helper had an init() function which ran at executable startup.
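A minimal sketch of that failure mode (the package and log message are made up): a helper file that is not suffixed _test.go is part of the normal package build, so its init() runs in every binary that imports the package, even though nothing in the file is called directly.

file: foo/helpers.go (package foo) - note: not a _test.go file
package foo

import "log"

// init runs at startup of any binary that links package foo,
// even if no non-test code calls anything else in this file.
func init() {
    log.Println("test helpers initialised")
}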

Related

How to write regression tests for a Rust binary crate?

There are a lot of similar posts from the past (1 2 3 4), but they all seem outdated or irrelevant.
My question is: how do you write regression tests for Rust binaries?
I could set them up as "unit tests" in my src/main.rs file, but that's annoying. Ideally, it would be set up as
root
|---src
|   |---main.rs
|   |---foo.rs
|   |---bar.rs
|---tests
|   |---regress1.rs
|   |---regress2.rs
Two options:
1. Split your code into a library and a binary: src/lib.rs and src/main.rs. Then you can write tests/ tests that load the library part.
This option is best if you specifically want to take advantage of the fact that tests/ tests ("integration tests") are individual binaries in their own right (e.g. if the code you want to test uses global variables or system calls that affect global state).
2. You can write #[test] tests in your binary's code without putting them directly in your src/main.rs file. Just write mod tests; or mod tests { mod regress1; } and put your tests in src/tests/regress1.rs, writing #[test] functions in that file as usual. (Or, if you really want them in a different directory, use the #[path] attribute on mod.)
This option allows faster test execution, because the tests aren't separate binaries and will be run in parallel in threads by the Rust test harness.
Minimal sketches of both options follow below.
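A sketch of option 1, assuming the crate is named mycrate (the add function is just a stand-in):

file: src/lib.rs
pub fn add(a: i32, b: i32) -> i32 {
    a + b
}

file: src/main.rs
fn main() {
    println!("2 + 2 = {}", mycrate::add(2, 2));
}

file: tests/regress1.rs
// Integration test: compiled as its own binary and linked against the library.
#[test]
fn regress1_add() {
    assert_eq!(mycrate::add(2, 2), 4);
}

And a sketch of option 2, keeping the #[test] functions inside the binary crate but in their own files:

file: src/main.rs
fn add(a: i32, b: i32) -> i32 {
    a + b
}

fn main() {
    println!("2 + 2 = {}", add(2, 2));
}

// Only compiled under `cargo test`; the nested mod resolves to src/tests/regress1.rs.
#[cfg(test)]
mod tests {
    mod regress1;
}

file: src/tests/regress1.rs
use crate::add;

#[test]
fn regress1_add() {
    assert_eq!(add(2, 2), 4);
}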

How to test the main package in Golang from a "test" package?

I have a simple program written in Go. It's an API. Inside the project folder there's a folder named cmd containing my main package, which initialises the app and defines the endpoints for the API. There's also a folder named after my program, containing multiple files from a package also named after my program. This package serves as the model: it does all the necessary queries and contains all the types I have defined.
I also created a folder called test. It contains all my test files under the package named test. The problem is that to run the tests, I have to access my main package! Is there a way to do that in Golang? I tried simply using import "cmd/main" but of course it doesn't work.
I also had an idea: perhaps I could move all my initialising functions (currently in the cmd folder) to the package named after my program. This way I could do a regular import in test, and keep in cmd only a main.go in the main package that serves as the entry point for the compiler.
I'm new to Go, so I'm not really confident. Do you think this is the right way?
Thanks.
EDIT: Apparently some people think this question is a duplicate, but it's not. Here's the explanation I gave in one of the comments:
I read this post before posting, but it didn't answer my question because in that post the person has his tests in the main package. The reason I asked my question is that I don't want to have my tests in the main package. I'd rather have them all in a test folder inside the same package.
What you want to do is not possible in Go (assuming you want to test private functions).
because I don't want to have my tests in the main package. I'd rather have them all in a test folder inside the same package.
Your code belongs to a different package if you move it into a different folder.
This is how Go defines packages (https://golang.org/doc/code.html#Organization):
Each package consists of one or more Go source files in a single directory.
This is how your code is structured:
main
| -- main.go (package main)
+ -- test
     | -- main_test.go (package test)
It is idiomatic to keep tests in the same folder as the code. It is normal for a language or framework to set rules that the developer has to follow, and Go is pretty strict about that.
This is how you can organize your code:
main
| -- main.go (package main)
| -- main_test.go (package main_test)
| -- main_private_test.go (package main)
Often it makes sense to test code against its public interfaces. The best way to do that is to put the tests into a different package. The Go convention is to keep tests in the same folder as the code, which normally means using the same package name. There is a workaround for that: you can add a _test suffix to the package name of your tests (package main_test).
If it is not possible to test your code using only public interfaces, you can add another file with tests and use package main in that file.
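A minimal sketch of the last case (the functions are made up for illustration): main_private_test.go sits next to main.go and declares package main, so it can see unexported identifiers.

file: main.go (package main)
package main

import "fmt"

func main() {
    fmt.Println(greeting("world"))
}

// greeting is unexported, so only code in package main can call it.
func greeting(name string) string {
    return "Hello, " + name + "!"
}

file: main_private_test.go (package main)
package main

import "testing"

func TestGreeting(t *testing.T) {
    if got := greeting("world"); got != "Hello, world!" {
        t.Errorf("greeting() = %q, want %q", got, "Hello, world!")
    }
}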

Produce static libs from tensorflow_cc and tensorflow_framework

As far as I understand, using Bazel I can only produce libtensorflow_cc.so and libtensorflow_framework.so.
I need to produce static libs that are position independent (-fPIC) because I'll link them into a dynamic lib of my own later.
I found this answer which suggests the use of a Makefile included in the project.
I successfully used it to replace libtensorflow_cc.so, but what can I do to replace libtensorflow_framework.so?
Not an actual answer, but too long for a comment.
I managed to do something like what you mention using Bazel on Windows. In particular, I wanted to make a single wrapper DLL with one or two headers (limited in functionality) that I could move around easily. I'll write a summary of the things that I did; it's rather convoluted and customized for our needs, but maybe you'll find something useful.
I pass --config=monolithic to the bazel build command (besides any other options that you need). That will avoid modularizing the library and thus remove the dependency on a libtensorflow_framework.so (see tools/bazel.rc).
The goal that I build is not any of the ones in the TensorFlow repository. Instead, I add a very small program that uses my wrapper as a new Bazel target (a C++ file plus my headers and a BUILD file). So all of TensorFlow has to be compiled beforehand in order to compile this final dummy program.
When I get that done, I take advantage of the fact that Bazel already compiles every subgoal as a static library. I check a file under the bazel-bin directory generated for my dummy program goal, with a name ending in .params - there I find the paths of all the static libraries that were used to compile it.
I copy all of these intermediate static libraries somewhere else. I also copy a bunch of headers I will need to compile my final wrapper (TensorFlow's own, but also Eigen, Protobuf and now Nsync too). I put all of this in a build area I have prepared beforehand.
I use an NMake Makefile to produce my custom DLL, using the static libraries, the copied headers and my own thin wrapper.
And that's about it, I think. I have an ugly Bash script I run on MSYS2 that does everything for me. Usually with every new release I need to tweak one or two things (some option in the configure script, some additional headers I need to copy, etc.), but I do get it to work in the end. It's quite a lot of fiddling though, so I'm not necessarily saying you should use the same approach (but feel free to ask for details about any step if you want).
Using the -2.params files @jdehesa mentioned and Bazel's verbose output (the -s switch), you can even create a link command to statically link these intermediate static libraries yourself. I automated this process for Windows/Linux/macOS and added it to the vcpkg package manager. To use it, just run vcpkg install tensorflow:x64-windows-static. If you're interested in the sources, you'll find them here.

How to compile a Perl 6 program to generate bytecode?

I am trying to understand Perl 6 and how it differs from Perl 5. I have learned that Perl 6 is a compiled language, but I don't see how: it doesn't appear to generate any intermediate code (a directly executable file or JVM bytecode).
I can't find any option to do this. How do I do it?
Currently I can execute my code directly:
$ perl6-j hello.p6
Hello world
I am following https://github.com/rakudo/rakudo
You can use --target= on the perl6 command line to see a human readable trace of each stage of the compiler. On JVM if you wish to have a "compiled" bytecode output you can use --target=jar and then take a look inside there. But ultimately Perl 6 compiles on the fly unless asked otherwise. It leaves a bytecode representation cached in library path directories of each "CompUnit", so that the compile step is faster next time. This can be seen in .precomp directories. The precomp cache is very tricky to use by hand due to how Perl 6 hashes and indexes all comp units. This is so libraries with the same name but different version and author can sit side by side. On MoarVM there is no equivalent to --target=jar but in the .precomp directory you can see the raw bytecode files that can be directly executed by moar if you link the runtime setting.
Updating the answer for this as this is now supported.
To generate the bytecode for a perl6 program, run perl6 --target=<backend> --output=foo foo.pl6. You can use mbc, jvm, or js as your target backend. The bytecode will be written to the file foo.
Writing bytecode to a file, both for modules and for programs, is not officially supported yet. Hence the lack of documentation for --target.

FORTRAN 95: is it possible to share a module without sharing the source code?

I would like to be able to share a FORTRAN 95 module without sharing its source code. Is it possible to do so (maybe by sharing the .MOD file)? In case this is relevant, I use the Silverfrost FTN95 compiler on Plato. So far, I have only managed to make this work by using the source code of the external module. Here is an example:
file: module_test.f95
module TEST
contains
subroutine sub1
code...
end subroutine sub1
end module TEST
file: main_program.f95
include "module_test.f95"
program MAIN_PROGRAM
use TEST
implicit none
code...
end program MAIN_PROGRAM
So, would it be possible for someone to use my module TEST without having my file module_test.f95 nor the line include "module_test.f95" on the main code?
Thanks a lot!
You could provide two things. 1) Compiled object code, possibly in library form. The disadvantage is that this depends on the compiler, OS and perhaps compiler version, and so could be a large burden to support. 2) Instead of providing the source code so that others could use the module, you could write equivalent interface descriptions of your routines. This, at least, stays at the source code level and is not compiler dependent. It would take some work to write and would have to be maintained if you changed the arguments of any of your procedures.
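A sketch of option 2, assuming the routine you want to expose is compiled as an external (non-module) procedure; the names TEST_INTERFACES and compute are made up:

file: test_interfaces.f95
module TEST_INTERFACES
   implicit none
   interface
      ! Describes the external routine compute, which is shipped
      ! only as compiled object code.
      subroutine compute(x, y)
         real, intent(in)  :: x
         real, intent(out) :: y
      end subroutine compute
   end interface
end module TEST_INTERFACES

A caller would then use TEST_INTERFACES and link against your compiled object file or library; only this small interface file has to be distributed as source.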
The solution I am using is, as M. S. B. recommended, to compile the module in library form. I am explicitly showing how I do this in case it might be helpful to someone, as this is what I did not know at the time.
First, one needs to compile the module module_test.f95. Using the gfortran compiler, this can be accomplished with the command gfortran -c module_test.f95. This will create two files, module_test.o and module_test.mod. These are the compiled module files that can be shared without sharing the source code.
Now to the main program. For it to make use of the module, one still needs the line use TEST, but no include of the source file:
program MAIN_PROGRAM
use TEST
implicit none
<...code...>
end program MAIN_PROGRAM
Now when compiling the main program, one must include the location of the .o module file in the command. In the case above, it would be gfortran main_program.f95 module_test.o (supposing that module_test.o is in the same folder as the project). This will compile the main program using the module, without needing its source code.
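Putting the two steps together on the command line (the output name main_program is arbitrary):

$ gfortran -c module_test.f95          # produces module_test.o and module_test.mod
$ gfortran main_program.f95 module_test.o -o main_program

If you prefer to hand out a proper static library, as the first answer suggests, you can archive the object file first and link against that instead (the library name libtest.a is again arbitrary):

$ ar rcs libtest.a module_test.o
$ gfortran main_program.f95 -L. -ltest -o main_program

In both cases the recipient still needs the compiler-specific .mod file alongside the compiled code, as noted above.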