What is the best way to group individual autotools projects into another "project"?

I am going to be creating a "library of libraries" and would like each individual project to be managed with autotools. In addition, I would like to be able to build the whole set of libraries at once.
Individual libraries:
libyarconveniencezzz
libyarfoo
libyarbar
libyarbaz
I suspect that I might need to have a top-level Makefile written by hand and then have each individual library/convenience library be its own autotools package.
I have done something similar to this (four or five years ago) but I have lost my reference code. About the only thing I really remember is that it took several months of fumbling around in autotools before I got everything set up the way I wanted.
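A minimal hand-written top-level Makefile along those lines might look like the following sketch (the directory names are taken from the list above, and it assumes each subpackage has already been configured):

# Top-level Makefile (note: recipe lines must be indented with tabs)
SUBDIRS = libyarconveniencezzz libyarfoo libyarbar libyarbaz

all:
	for dir in $(SUBDIRS); do $(MAKE) -C $$dir || exit 1; done

clean:
	for dir in $(SUBDIRS); do $(MAKE) -C $$dir clean; done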

I use the following code to build several autotools-managed packages (though they all create "normal" binaries rather than a "library of libraries"):
configure.ac:
AC_INIT([bigpackage], [1.0], [bigpackage@email.org])
AM_INIT_AUTOMAKE
AC_CONFIG_FILES([Makefile])
AC_CONFIG_SUBDIRS([package1 package2 package3])
AC_OUTPUT
Makefile.am:
SUBDIRS = package1 package2 package3
Then all this can be set up as usual:
touch NEWS README AUTHORS ChangeLog
autoreconf -i
./configure
make
I wouldn't necessarily call it the "best way", but it works and nicely passes all flags to the subpackages.

Related

Produce static libs from tensorflow_cc and tensorflow_framework

As far as I understand, using Bazel I can only produce libtensorflow_cc.so and libtensorflow_framework.so.
I need to produce static libs that are position-independent (-fPIC) because I'll link them into a dynamic lib of my own later.
I found this answer, which suggests using a Makefile included in the project.
I successfully used it to replace libtensorflow_cc.so, but what can I do to replace libtensorflow_framework.so?
Not an actual answer, but too long for a comment.
I managed to do something like what you mention using Bazel on Windows. In particular, I wanted to make a single wrapper DLL with one or two headers (limited in functionality) that I could move around easily. I'll write a summary of the things that I did; it's rather convoluted and customized for our needs, but maybe you'll find something useful.
I pass --config=monolithic to the bazel build command (besides any other options that you need). That avoids modularizing the library and thus removes the dependency on a separate libtensorflow_framework.so (see tools/bazel.rc).
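For reference, the invocation is just the ordinary build command with that flag added (the stock libtensorflow_cc.so target is shown here as an example; substitute your own wrapper target):

bazel build --config=monolithic //tensorflow:libtensorflow_cc.so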
The goal that I build is not any of the ones in the TensorFlow repository. Instead, I add a very small program that uses my wrapper as a new Bazel target (a C++ file plus my headers and a BUILD file). All of TensorFlow therefore has to be compiled beforehand in order to compile this final dummy program.
Once that is done, I take advantage of the fact that Bazel already compiles every subgoal as a static library. I check a file under the bazel-bin directory generated for my dummy program goal, with a name ending in .params; there I find the paths of all the static libraries that were used to compile it.
I copy all of these intermediate static libraries somewhere else. I also copy a bunch of headers I need to compile my final wrapper (TensorFlow's own, but also Eigen, Protobuf and now Nsync too). I put all of this in a build area I have prepared beforehand.
I use an NMake Makefile to produce my custom DLL from the static libraries, the copied headers and my own thin wrapper.
And that's about it, I think. I have an ugly Bash script I run on MSYS2 that does everything for me. Usually with every new release I need to tweak one or two things (some option in the configure script, some additional headers I need to copy, etc.), but I do get it to work in the end. It's quite a lot of fiddling, though, so I'm not necessarily saying you should use the same approach (but feel free to ask for details about any step if you want).
Using the -2.params files @jdehesa mentioned and Bazel's verbose output (the -s switch), you can even create a link command to statically link these intermediate static libraries yourself. I automated this process for Windows/Linux/macOS and contributed it to the vcpkg package manager. To use it, just run vcpkg install tensorflow:x64-windows-static. If you're interested in the sources, you'll find them here.
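As a rough sketch of that extraction step (the grep pattern is an assumption; the exact name and location of the params file depend on your target and platform):

find bazel-bin/ -name "*.params"
grep -E '\.(a|lib)$' <params-file-found-above>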

Config.cmake file for custom-made shared libraries

I have this project that I made to experiment with Qt and shared libraries. It is basically a couple of Qt widgets from the Qt tutorials plus what I think is the right CMakeLists configuration, so that a MylibConfig.cmake is automatically generated from a MylibConfig.cmake.in to share the library. The problem is that I don't want the end user to have to add my library's dependencies to their own CMakeLists.txt. That is, in my case the library depends on Qt4, but I don't want the end user to have to call find_package(Qt4 REQUIRED). Imagine that I want to provide enclosed functionality to someone who does not need or want to know what my library is built on. Is there a way for the automatic generation of MylibConfig.cmake to find all necessary packages by itself, or is the only option to add the find_package calls manually in MylibConfig.cmake.in?
Thank you very much.
In fact, both of the projects mentioned do the find of their dependencies from their *Config.cmake files. And nowadays that is the only option -- CMake can't do it for you "automatically".
So, one way or another, your config module should do the same.
The easy way is to add find_dependency() calls (since you know exactly which other packages your project is based on).
A little harder is to do it "automatically" (by writing your own helper function) -- for example, by inspecting the properties of your target(s), "searching" where all those libraries come from, and finally generating the find_dependency() calls anyway.
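A minimal sketch of the easy way, assuming the question's Qt4-based Mylib, that the config file is generated with configure_package_config_file(), and that the targets are exported as MylibTargets.cmake (the names are illustrative):

# MylibConfig.cmake.in
@PACKAGE_INIT@

# Resolve Mylib's own dependencies here, so that consumers don't have
# to call find_package(Qt4 REQUIRED) in their own CMakeLists.txt.
include(CMakeFindDependencyMacro)
find_dependency(Qt4)

# Import the targets exported by Mylib's install(EXPORT ...) rule.
include("${CMAKE_CURRENT_LIST_DIR}/MylibTargets.cmake")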

Why is CMake's file(GLOB) evil?

The CMake documentation says this about the file(GLOB) command:
We do not recommend using GLOB to collect a list of source files from your source tree. If no CMakeLists.txt file changes when a source is added or removed then the generated build system cannot know when to ask CMake to regenerate.
Several discussion threads around the web second the view that globbing source files is evil.
However, to make the build system know that a source has been added or removed, it's sufficient to say
touch CMakeLists.txt
Right?
Then that's less effort than editing CMakeLists.txt to insert or delete a source file name, and it's no more difficult to remember. So I don't see any good reason to advise against file(GLOB).
What's wrong with this argument?
The problem is when you're not alone working on a project.
Let's say project has developer A and B.
A adds a new source file x.c. He doesn't change CMakeLists.txt and commits once he has finished implementing x.c.
Now B does a git pull, and since there have been no modifications to CMakeLists.txt, CMake isn't run again and B gets linker errors when compiling, because x.c has not been added to his source file list.
2020 Edit: CMake 3.12 introduced the CONFIGURE_DEPENDS argument to file(GLOB), which makes globbing scan for new files: https://cmake.org/cmake/help/v3.12/command/file.html#filesystem
This is, however, not portable (Visual Studio and Xcode solutions don't support the feature), so please use it only as a first approximation; otherwise other people may have trouble building your CMake files under their IDE of choice!
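For reference, a minimal usage sketch (the glob pattern and target name are assumptions):

file(GLOB sources CONFIGURE_DEPENDS "src/*.cpp")
add_executable(myapp ${sources})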
It's not inherently evil - it has advantages and disadvantages, covered relatively well in this answer here on StackOverflow. But if you use it carelessly, you could end up ignoring dependency changes and requiring clean rebuilds of large parts of your codebase.
I'm personally in favor of using it - in smaller projects, or on certain subdirectories in larger ones - to avoid having to enter every file manually into the build files. Edit: My preference has changed and I currently tend to avoid it.
On top of the reasons other people have posted here, IMHO the worst issue with globbing is that it can yield DIFFERENT file lists on different platforms. As I see it, that's a bug. On OS X, globbing ignores case (the default filesystem is case-insensitive); on an Ubuntu box it doesn't.
Globbing breaks all code inspection in tools like CLion, which otherwise understand a limited subset of CMakeLists.txt and do not (and never will) support globbing, as it is unsafe.
Write a script to dump the globbed list and paste it in - it's very simple, and then CLion can actually find the referenced files and treat them as useful context. You could even put such a script into the tree so that other devs can run it themselves, or set up git hooks to make it happen.
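A minimal sketch of such a dump script, written as a standalone CMake script (the glob patterns are assumptions):

# dump_sources.cmake -- run with: cmake -P dump_sources.cmake
# Globs the tree once and prints a set() block to paste into CMakeLists.txt.
file(GLOB_RECURSE sources RELATIVE "${CMAKE_CURRENT_LIST_DIR}" "src/*.cpp" "src/*.h")
list(JOIN sources "\n    " joined)
message("set(SOURCES\n    ${joined}\n)")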
In no case should some random file dropped into some directory ever get automatically linked; that's how trojans happen.
Also, CLion without context - without jumping to known definitions and the like - is like hiking barefoot: why bother?

Number of classes in project/workspace

Is there any way to get the number of classes in a project or the complete workspace in Xcode?
A simple way to get a rough idea for a project is to check the Compile Sources section in the project's Build Phases. It lists all source files (.m, .swift) and doesn't include any headers.
Assuming roughly one class per source file, this gives you a ballpark figure for how many classes there are in your project at a glance. Note that it doesn't include any embedded projects or frameworks.
You could use cloc, which can also be installed via Homebrew: brew install cloc.
cloc is an open-source command-line tool for counting lines of code, but it also reports the number of files grouped by file type. The simplest form is cloc <path-to-your-project-dir>, but the output can be configured with parameters.
A more complex solution (IMHO too complex) is to use SonarQube with an Objective-C plugin. SonarQube has a nice interface and many features, but just for counting classes it's way too much.

In cmake, what is a "project"?

This question is about the project command and, by extension, what the concept of a project means in cmake. I genuinely don't understand what a project is, and how it differs from a target (which I do understand, I think).
I had a look at the cmake documentation for the project command, and it says that the project command does this:
Set a name, version, and enable languages for the entire project.
It should go without saying that using the word project to define project is less than helpful.
Nowhere on the page does it seem to explain what a project actually is (it goes through some of the things the command does, but doesn't say whether that list is exhaustive). The cmake.org examples take us through a basic build setup, and while they use the project keyword they don't explain what it does or means, at least not as far as I can tell.
What is a project? And what does the project command do?
A project logically groups a number of targets (that is, libraries, executables and custom build steps) into a self-contained collection that can be built on its own.
In practice that means that if you have a project command in a CMakeLists.txt, you should be able to run CMake from that file and the generator should produce something that is buildable. In most codebases, you will only have a single project per build.
Note however that you may nest multiple projects. A top-level project may include a subdirectory which is in turn another self-contained project. In this case, the project command introduces additional scoping for certain values. For example, the PROJECT_BINARY_DIR variable will always point to the root binary directory of the current project. Compare this with CMAKE_BINARY_DIR, which always points to the binary directory of the top-level project. Also note that certain generators may generate additional files for projects. For example, the Visual Studio generators will create a .sln solution file for each subproject.
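A minimal sketch of such a nested layout (project and directory names are illustrative):

# Top-level CMakeLists.txt
cmake_minimum_required(VERSION 3.10)
project(Big)
add_subdirectory(foo)    # foo is itself a self-contained project

# foo/CMakeLists.txt
cmake_minimum_required(VERSION 3.10)
project(Foo)             # scopes PROJECT_BINARY_DIR etc. to this subtree
add_library(foo foo.cpp)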
Use sub-projects if your codebase is very complex and you need users to be able to build certain components in isolation. This gives you a very powerful mechanism for structuring the build system. Due to the increased coding and maintenance overhead required to make the several sub-projects truly self-contained, I would advise going down that road only if you have a real use case for it. Splitting the codebase into different targets should always be the preferred mechanism for structuring the build, while sub-projects should be reserved for those rare cases where you really need to make a subset of targets self-contained.