Linker chooses "wrong" main with Boost.Test - program-entry-point

When using Boost.Test, there is generally no need to define a main() function, since Boost.Test provides one itself.
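For reference, a minimal sketch of that setup, assuming the header-only variant of Boost.Test; defining BOOST_TEST_MODULE before the include is what makes the framework generate main():

#define BOOST_TEST_MODULE MyTests
#include <boost/test/included/unit_test.hpp>

BOOST_AUTO_TEST_CASE(sanity_check)
{
    // Boost.Test supplies main(); we only write test cases.
    BOOST_CHECK_EQUAL(2 + 2, 4);
}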
I recently had to convert my project to use static linking of 3rd-party libraries (on VS2010). Naturally, I had to link against multiple .libs for the build to succeed, and my build ran just fine.
However, when I ran my test project, something really strange happened. It seems that one of the 3rd party .libs (libpng), required by one of my dependent libraries, contained a test file with a main() function defined within (pngtest.c, if you must know).
Since my project did not have a main() function, the linker chose that one as my "test" application. Thus, none of my tests ran.
Does anyone know how I prevent this from happening? How can I tell the linker/compiler to use the Boost.Test main()?

Answering my own question, and clarifying @Tom's answer.
Turns out that the libpng build script I was using was not the original one shipped with libpng but one created by the OpenCV build system. The file pngtest.c was mistakenly included in the build.
The solution to the problem was to remove pngtest.c from the libpng build script.
The latest OpenCV version no longer includes this file.
For more details see my post to the Boost mailing list here and my OpenCV bug report here.

Adi, I had the same problem. Looks like you were already all over this one. Thanks to Google and your efforts, I was able to figure it out.
Here's some info to round out the answer:
discussion:
http://boost.2283326.n4.nabble.com/Boost-Test-Linker-chooses-wrong-main-function-td4634872.html
solution:
http://code.opencv.org/issues/2322
Basically, I just excluded the pngtest.c file from the libpng project, and recompiled OpenCV. Looks like it will be fixed in the next release of OpenCV.
Thanks!

Related

Find out why cmake adds specific link flags

I have a big project built with CMake. It mostly works.
But recently some combination of build server vs. test server broke. Investigation found that the final compile/link command calls gcc (...) -licudata -licui18n -licuuc (...), which introduces a dependency on shared libraries that are not present on the test server.
How do I find out what in my project (my library, an imported library, a found library, whatever) adds those 3 flags to the link command?
I don't add them explicitly, so something is doing it automagically and I want to find it. compile_commands.json doesn't have them because linking flags don't belong in it. CMakeCache.txt has those flags in some obscure variable PC_LIBXML_STATIC_LIBRARIES:INTERNAL, but removing them there doesn't affect the compile/link command.
Note that this question is not about dealing with libicu specifically but about a method of investigation in general (though comments about any known problems with libicu would be appreciated too).
I found out that the dependency graphs created by CMake can have more detail than was configured for our project. Here are all the options: https://cmake.org/cmake/help/latest/module/CMakeGraphVizOptions.html I expect GRAPHVIZ_EXTERNAL_LIBS and GRAPHVIZ_SHARED_LIBS are the most important ones to set to true.
We enabled everything that could be enabled, filtered out nothing, and the resulting graph was massive (too big for xdot, but luckily .dot files are human readable); it showed that Boost::regex uses those 3 libraries.
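For reference, a sketch of how such a graph can be produced, assuming CMake's documented CMakeGraphVizOptions mechanism (the option values below are illustrative):

# CMakeGraphVizOptions.cmake, placed in the top-level source or build directory
set(GRAPHVIZ_EXTERNAL_LIBS TRUE)   # include external libraries in the graph
set(GRAPHVIZ_SHARED_LIBS TRUE)     # include shared library targets
set(GRAPHVIZ_IGNORE_TARGETS "")    # filter nothing out

Then generate and render the graph from the build directory:

cmake --graphviz=deps.dot .
dot -Tsvg deps.dot -o deps.svg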

What is the best way to flush precompiled perl6 modules?

I am trying to refactor some code. My approach (using vi) is to copy my old libraries from /lib to /lib2. That way I can hack out big sections, but still have a framework to refactor.
So I go ahead and change mymain.p6 header from use lib '../lib'; to use lib '../lib2';. Then I delete a chunk of the lines in ../lib2/mylibrary.pm6 and make darn sure :w is doing what I expect.
Imagine my surprise when my program still works perfectly despite having been largely deleted. It even works when I rm -R /lib, so nothing back there is persisting.
Is there a chance that I have a precomp of the old lib module lying around? If so, how can I flush it?
This is Rakudo Star version 2019.03.1 built on MoarVM version 2019.03 implementing Perl 6.d.
Precompiled modules are stored in the precomp directory. You can try to rename or delete the ~/.precomp directory.
See also this SO question here.
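A hedged sketch of what that looks like from the shell (paths are illustrative; a per-library cache also sits next to the library sources):

rm -r ~/.precomp          # global cache, if present
rm -r ../lib2/.precomp    # per-library cache, copied along with the sources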
Update. Well I thought I'd replicated the scenario. It was reliably showing the bug during a one hour period. But now it isn't. Which is pretty disturbing. Investigation continues...
I've replicated @p6steve's scenario in case someone wishes to report this as a bug. At the moment I'm with @p6steve (per comment below) in that I'm going to treat this as a DIHWIDT rather than a reportable bug. That said, now we have a golf'd summary.
The original main program, using path1, followed by the module it uses directly (lib1) and then the one that lib1 uses (lib2):

use lib 'path1';
use lib1;
say $lib1::value;

unit module lib1;
use lib2;
our $value = $lib2::value;

unit module lib2;
our $value = 1;

This displays 1.
If the libs are copied to a fresh directory, including the .precomp directory, and then lib2 is edited but lib1 is not, the change to lib2 is ignored.
Here it is on glot.io before and after copying the libs and their .precomp directory and then editing the libs.
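For anyone trying to reproduce this, a rough sketch of the sequence from the shell, assuming the file layout above (names are illustrative):

cp -r path1 path2                # copies path1/.precomp along with the sources
# edit path2/lib2.pm6: change 'our $value = 1;' to 'our $value = 2;'
# edit the main program: change use lib 'path1'; to use lib 'path2';
perl6 main.p6                    # still displays 1 -- the stale precomp wins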
Original answer
Thank you for editing your question. That gives us all more to go on. :)
I'd like to try to get to the bottom of it and hope you're willing to have a go too. This (n)answer and comments below it will record our progress.
From your comment on @ValleLukas' answer:
Then I noticed ../lib2/.precomp directory - so realised library precomps are stored in the library folder. That did the job!
Here's my first guess at what happened:
You copied lib en masse to lib2. This copied the precomp directory with it.
You modified the use lib ... statement in mymain.p6 to refer to lib2.
Your mymain.p6 code includes a use module-that-directly-or-indirectly-uses-mylibrary.
You modify mylibrary.pm6.
But nothing changes! Why not?
You haven't touched module-that-directly-or-indirectly-uses-mylibrary, so Rakudo uses the precompiled version of that module from the lib2/.precomp directory.
Speculating...
Perhaps the fact that that precompiled version exists leads the precompilation logic to presume that if it also finds a precompiled version of a module that's used by module-that-directly-or-indirectly-uses-mylibrary then it can go ahead and use that and not even bother to check how its timestamp compares to the source version.
Does this match your scenario? If not, which bits does it get wrong?

Produce static libs from tensorflow_cc and tensorflow_framework

As far as I understand, using Bazel I can only produce libtensorflow_cc.so and libtensorflow_framework.so.
I need to produce static libs that are position independent (-fPIC) because I'll link them to a dynamic lib of my own later.
I found this answer, which suggests using a Makefile included in the project.
I successfully used it to replace the libtensorflow_cc.so but what can I do to replace libtensorflow_framework.so?
Not an actual answer, but too long for a comment.
I managed to do something like what you mention using Bazel on Windows. In particular, I wanted to make a single wrapper DLL with one or two headers (limited in functionality) that I could move around easily. I'll write a summary of the things that I did; it's rather convoluted and customized for our needs, but maybe you'll find something useful.
I pass --config=monolithic to the bazel build command (besides any other options that you need). That will avoid modularizing the library and thus remove the dependency on a libtensorflow_framework.so (see tools/bazel.rc).
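For example, a hedged sketch of the kind of invocation this implies (the target label below is the standard one; adjust it to whatever goal you actually build):

bazel build --config=monolithic //tensorflow:libtensorflow_cc.so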
The goal that I build is not any of the ones in the TensorFlow repository. Instead, I add a very small program that uses my wrapper as a new Bazel target (a C++ file plus my headers and a BUILD file). So all of TensorFlow has to be compiled beforehand in order to compile this final dummy program.
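A rough sketch of what such a dummy target's BUILD file might look like (the target name, file names and the exact TensorFlow dependency label are illustrative, not the actual ones from my setup):

cc_binary(
    name = "dummy_wrapper",
    srcs = ["dummy_main.cc", "my_wrapper.cc", "my_wrapper.h"],
    deps = ["//tensorflow/core:tensorflow"],
)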
When I get that done, I take advantage of the fact that Bazel already compiles every subgoal as a static library. I check a file under the bazel-bin directory generated for my dummy program goal, with a name ending in .params - there I find the paths of all the static libraries that were used to compile it.
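Something along these lines should locate that file (the name pattern is illustrative):

find bazel-bin -name "*.params" | grep dummy_wrapper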
I copy all of these intermediate static libraries to somewhere else. Also, I copy a bunch of headers I will need to compile my final wrapper (TensorFlow own's, but also Eigen, Protobuf and Nsync now too). I put all of this in a build area I have prepared before.
I use an NMake Makefile to produce my custom DLL, using the static libraries, the copied headers and my own thin wrapper.
And that's about it, I think. I have an ugly Bash script I run on MSYS2 that does everything for me. Usually with every new release I need to tweak one or two things (some option in the configure script, some additional headers I need to copy, etc.), but I do get it to work in the end. It's quite a lot of fiddling though, so I'm not necessarily saying you should use the same approach (but feel free to ask for details about any step if you want).
Using the -2.params files @jdehesa mentioned and Bazel's verbose output (the -s switch), you can even create a link command to eventually statically link these intermediate static libraries. I automated this process for Windows/Linux/macOS and included it in the vcpkg package manager. To use it, just run vcpkg install tensorflow:x64-windows-static. If you're interested in the sources, you'll find them here.

Compiling .hx code directly (or indirectly) to a dynamic library (ndll)

I am working on a project and I have a plan to separate certain sections out into separate dlls/ndlls in the final program. The main reason I want to do this is to support plugins and add ons, so more functionality can be added if needed, but the core app can still be used if that's the only requirement.
I have done something similar in C# (albeit through an IDE, so I never had to write any linker/compiler commands), so I know the general process, but I can't seem to find a way to write Haxe code and then have it compile into an ndll.
I found this http://old.haxe.org/doc/cpp/ffi?lang=en which shows how to compile C++ code into an ndll using hxcpp and g++. I would think there should be a way I can use Lime or hxcpp to create a build file that will allow me to do it in one step, instead of having to make a "fake" main function to compile the Haxe to C++ or C#.
If anyone knows of a project that does this and has a build.hxml or build.xml file that describes it, or a tutorial or guide that talks about this, I would love to see it.
Try this:
lime create extension TestExt
lime rebuild TestExt windows
Replace "windows" with "mac" or "linux" as appropriate. Assuming it works, the ndll will show up in a subfolder of TestExt/ndll/.
As for tutorials, I wrote this one. It's targeted at OpenFL programmers, but the "Writing code for iOS" section covers what you'll need to know. (You can also just model your code on the template.)
In case it helps, I've made a tool to generate some of the boilerplate code that Lime requires.

The destination does not support the architecture for which the selected software is built

Today I was creating a shared library in a project containing multiple targets where I first had only one (and no shared lib) when all of a sudden my project produced the following error when trying to run.
"The destination does not support the architecture for which the selected software is built. Switch to a destination that supports that architecture in order to run the selected software."
Do not change the Bundle name or the Executable file entries in Info.plist. I changed them and got this error; after I changed them back to the defaults, the error was gone.
After going through all the suggested steps here on Stack Overflow to no avail, I found the answer to be a very simple one ...
I forgot to include the main.m in the targets so an executable would not be built. Adding the appropriate main files to their targets solved my problem.
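For context, main.m is the app's entry point, and it must have target membership checked for each target that should produce an executable. A typical UIKit main.m looks like this (AppDelegate stands in for whatever your app delegate class is called):

#import <UIKit/UIKit.h>
#import "AppDelegate.h"

int main(int argc, char *argv[]) {
    @autoreleasepool {
        // If this file is not a member of the target, no executable is built.
        return UIApplicationMain(argc, argv, nil, NSStringFromClass([AppDelegate class]));
    }
}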
The selected destination does not support the architecture is a similar question that may help you; it is how I resolved mine, by the way.
Select Info.plist in your project navigator tree and make sure it is not assigned to a target. I have confirmed this is the correct solution.