Optional manuals? Or pre-compiled in the distribution? (packaging)

I'm working on a personal project written in C++, and I'm using GNU Autotools as build system.
I would like to distribute my software together with manual pages, but I'm not very fond of Groff. For this reason I decided to write everything in AsciiDoc and compile it to Groff with a2x.
While I'm quite satisfied with the result, I noticed that installing Asciidoc might require a lot of disk space. For instance, asciidoc-base in Debian Stretch requires 1928 MB of dependencies! (Edit: not actually true; I forgot to disable suggested/recommended packages, but the use case is still relevant.)
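For reference, compiling one of my man page sources looks like this (the same a2x invocation used in the Makefile below; the file names are from my project):

a2x --doctype manpage --format manpage foo.1.txt   # writes foo.1 next to the source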
One solution would be to make it optional. To achieve this my configure.ac contains the following lines:
AC_CHECK_PROG([asciidoc], [a2x], [a2x], [false])
AM_CONDITIONAL([ASCIIDOC_AVAIL], [test x$asciidoc != xfalse])
…and the man/Makefile.am file is defined as follows:
if ASCIIDOC_AVAIL
man1_MANS = foo.1
man5_MANS = foo.conf.5

foo.1: foo.1.txt
        $(asciidoc) --doctype manpage --format manpage ./$<

foo.conf.5: foo.conf.5.txt
        $(asciidoc) --doctype manpage --format manpage ./$<

clean:
        rm $(man1_MANS) $(man5_MANS)
endif
Even though this seems to work, I'm not very happy with it. I don't like the idea of not providing a manual.
Would it be advisable to pre-compile the man pages as part of the make dist step? In the same way as the distribution tarball foo-x.y.z.tar.gz contains the configure script (which is not checked into the VCS but generated by autoreconf), I could pre-compile foo.1 and foo.conf.5 and distribute them with the source tarball.
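To be concrete, the release workflow I have in mind is roughly the following sketch (assuming the usual Autotools layout):

autoreconf -i   # generates configure, Makefile.in, ... (not checked into the VCS)
./configure
make dist       # should produce foo-x.y.z.tar.gz with the pre-built man pages inside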
Provided that this is acceptable from a best-practices standpoint, how can I achieve it? I tried declaring them as EXTRA_DIST (EXTRA_DIST = $(man1_MANS) $(man5_MANS)) but I didn't have much luck.
Any idea?
EDIT: the Best way to add generated files to distribution? question seems to be related, even though I doubt there's a built in mechanism for my specific case.

even though I doubt there's a built in mechanism for my specific case.
Actually, there is such a mechanism described here, about halfway down that page. The dist_ prefix is what you are looking for:
dist_man1_MANS = foo.1
dist_man5_MANS = foo.conf.5
if ASCIIDOC_AVAIL
foo.1: foo.1.txt
        $(asciidoc) --doctype manpage --format manpage ./$<

foo.conf.5: foo.conf.5.txt
        $(asciidoc) --doctype manpage --format manpage ./$<

CLEANFILES += $(dist_man1_MANS) $(dist_man5_MANS)
endif
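As a quick check that the man pages actually end up in the tarball, something like this should work (the version number here is just an example):

make dist
tar tzf foo-1.0.tar.gz | grep -E '\.(1|5)$'   # should list foo.1 and foo.conf.5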

OP here.
Thanks to ldav1s's answer, I came up with a good definition for my man/Makefile.am file. I'm posting it here as an answer, but the whole credit goes to ldav1s.
Here it goes:
dist_man1_MANS = foo.1
dist_man5_MANS = foo.conf.5
EXTRA_DIST = foo.1.txt foo.conf.5.txt
if ASCIIDOC_AVAIL
foo.1: foo.1.txt
        $(asciidoc) --doctype manpage --format manpage ./$<

foo.conf.5: foo.conf.5.txt
        $(asciidoc) --doctype manpage --format manpage ./$<
endif
CLEANFILES = $(dist_man1_MANS) $(dist_man5_MANS)
Some useful information about it:
The manpages foo.1 and foo.conf.5 are generated during make dist thanks to the dist_ prefix (as pointed out by ldav1s). The two manpages are included in the distribution tarball.
By listing the sources foo.1.txt and foo.conf.5.txt in EXTRA_DIST, those two files get distributed as well. This is required, since otherwise the distribution tarball would include only the compiled man pages.
Declaring CLEANFILES makes make clean delete the compiled man pages.
Just to give the idea, with this configuration I can run make dist and obtain a tarball with the following properties:
The tarball will already contain the compiled manpages together with the asciidoc sources.
Running ./configure && make immediately after extracting the tarball won't compile the manpages, as they are already available.
Running ./configure && make clean immediately after extracting the tarball will remove the compiled man pages (even though they were included in the distribution tarball).
I also verified the behaviour of the build system both when asciidoc is installed and when it is not: I get exactly what I wanted in the first place.
If asciidoc is not installed (so ./configure won't detect it), running ./configure && make clean && make won't recompile the man pages.
If asciidoc is installed, running ./configure && make clean && make will recompile the manpages.
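For completeness, the checks above boil down to something like the following sketch (again with an example version number):

make dist                              # tarball contains foo.1 and foo.conf.5
tar xzf foo-1.0.tar.gz && cd foo-1.0
./configure && make                    # man pages are not rebuilt: already present
make clean && make                     # rebuilt only if configure detected a2x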

Related

cmake - linking static library pytorch cannot find its internal functions during build

I'm trying to build a program using cmake. For several reasons, the program must be built using static libraries rather than dynamic libraries, and I need to use PyTorch so this is what I've done:
Downloaded and installed PyTorch static library (I've found libtorch.a in the proper path, in /home/me/pytorch/torch/lib)
Made CMakeLists.txt with the following contents:
cmake_minimum_required(VERSION 3.5.1 FATAL_ERROR)
project(example-app LANGUAGES CXX)
find_package(Torch REQUIRED)
add_executable(example-app example-app.cpp argparse/argparse.cpp)
target_link_libraries(example-app "${TORCH_LIBRARIES}" -static -fopenmp)
set_property(TARGET example-app PROPERTY CXX_STANDARD 14)
FYI, example-app.cpp is the file with the main function, and argparse/ is a directory with some source code for functions called in example-app.cpp
It works up to cmake -DCMAKE_PREFIX_PATH=/home/me/pytorch/torch .., but the following build incurs some errors, saying it could not find references to some functions, namely functions starting with fbgemm::. fbgemm is (as far as I know) some sort of GEMM library used in implementing PyTorch.
It seems to me that while linking the static PyTorch library, its internal libraries like fbgemm have not been linked properly, but I'm not an expert on cmake and honestly not entirely sure.
Am I doing something wrong, or is there a workaround for this problem? Any help or push in the right direction would be greatly appreciated.
P.S.
The exact error has not been posted because it is way too long; it consists mostly of undefined reference to ~ errors. If looking at the error message might be helpful to some people, I'd be happy to edit the question and post it.
Building and running the file works fine if I remove the parts of the code that require the library's functions, without commenting out #include <torch/torch.h> in example-app.cpp.
I recently went through a similar process of statically linking PyTorch and, to be honest, it wasn't too pretty.
I will outline the steps I have undertaken (you can find the exact source code in torchlambda; here is the CMakeLists.txt, which also includes the AWS SDK and AWS Lambda static builds, and here is a script building PyTorch from source, cloning and building via /scripts/build_mobile.sh with CPU support only).
This covers only the CPU case, though similar steps should be fine if you need CUDA; it will at least get you started.
PyTorch static library
Pre-built static PyTorch
First of all, you need pre-built static library files (all of them need to be static, hence no .so; only those with the .a extension are suitable).
To be honest, I've looked for those provided by PyTorch on the installation page, yet there is only the shared version.
In one GitHub issue I've found a way to download them as follows:
Instead of downloading (here via wget) shared libraries:
$ wget https://download.pytorch.org/libtorch/cu101/libtorch-shared-with-deps-1.4.0.zip
you rename shared to static (as described in this issue), so it would become:
$ wget https://download.pytorch.org/libtorch/cu101/libtorch-static-with-deps-1.4.0.zip
Yet, when you download it, there is no libtorch.a under the lib folder (I didn't find libcaffe2.a either, as indicated by this issue), so what I was left with was building explicitly from source.
If you somehow have those files (if so, please share where you got them from), you can skip the next step.
Building from source
For the CPU version I have used the /pytorch/scripts/build_mobile.sh file; you can base your version off of it if GPU support is needed (maybe you only have to pass -DUSE_CUDA=ON to this script, though I'm not sure).
Most important is cmake's -DBUILD_SHARED_LIBS=OFF, in order to build everything as static libraries. You can also check the script from my tool, which passes arguments to build_mobile.sh as well.
Running the above will give you the static files in /pytorch/build_mobile/install by default, where there is everything you need.
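For illustration, the whole from-source step looks roughly like this (a sketch; the script location and output paths may differ between PyTorch versions):

git clone --recursive https://github.com/pytorch/pytorch.git
cd pytorch
./scripts/build_mobile.sh           # CPU-only build configured with -DBUILD_SHARED_LIBS=OFF
ls build_mobile/install/lib/*.a     # the static archives end up here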
CMake
Now you can copy the above build files to /usr/local (better not to, unless you are using Docker, as torchlambda does) or set the path to them from within your CMakeLists.txt like this:
set(LIBTORCH "/path/to/pytorch/build_mobile/install")
# Below will append libtorch to path so CMake can see files
set(CMAKE_PREFIX_PATH "${CMAKE_PREFIX_PATH};${LIBTORCH}")
Now the rest is fine, except target_link_libraries, which (as indicated by this issue; see the related issues listed there for additional reference) should be used with the -Wl,--whole-archive linker flag, which brought me to this:
target_link_libraries(example-app PRIVATE
    -lm
    -Wl,--whole-archive "${TORCH_LIBRARIES}"
    -Wl,--no-whole-archive
    -lpthread
    ${CMAKE_DL_LIBS})
You may not need -lm, -lpthread or ${CMAKE_DL_LIBS}, though I needed them when building on Amazon Linux AMI.
Building
Now you are off to building your application. The standard libtorch way should be fine, but here is another command I used:
mkdir build && \
cd build && \
cmake .. && \
cmake --build . --config Release
The above will create a build folder where the example-app binary should now be safely located.
Finally, use ldd build/example-app to verify everything from PyTorch was statically linked; see point 5 of the aforementioned issue, your output should look similar.
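For example, output along these lines would indicate success (illustrative only; addresses and library versions will differ):

$ ldd build/example-app
        linux-vdso.so.1 (0x...)
        libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x...)
        libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x...)
        # crucially, no libtorch.so / libc10.so entries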

Creating a Debian Package from CMake Project

I am considering creating a Debian package from an existing library (paho-mqtt-c). The project uses CMake as its build system. After some research, I think I need to create two or three different packages:
libpaho-mqtt3 (with library .so files and related stuff)
libpaho-mqtt3-dev (with header files)
also maybe I need a third package with sample files or documentation (called paho-mqtt3?)
I have done some research and it seems there exist at least three different ways how I can create a Debian package when I use CMake as my build system:
Use debmake procedure described in Debian documentation (Chapter 8).
Use cmake-debhelper.
Use dh-cmake
I have looked into all three methods and it seems each has some advantages and disadvantages.
Debmake
As far as I understand, debmake assumes I have an upstream tarball with the sources and the build system, and then I invoke debmake on the extracted tarball. Afterwards I get a lot of templates which I need to adjust manually to fill in the missing gaps. I started doing this, but it seems quite complex.
cmake-debhelper
I tried to use it but received lots of errors. The GitHub page has an open issue with no solution, so I stopped looking at this. This is also what the paho-mqtt-c build system currently uses, but it does not work due to the linked issue.
dh-cmake
I briefly looked into this and it seems to be the most modern solution; it should be possible to combine it with CPack. However, dh-cmake seems to be available only for Ubuntu 18.04 and 16.04, and since I am using Ubuntu 19.10 I was not able to install it on my system.
Have I missed anything in my research? What are the recommended steps to create a Debian package from a software managed with CMake and which documentation is useful to read?
In short, on Ubuntu you need to create at least these files:
debian/
    changelog
    control
    copyright
    rules
And then run debuild; it will run the cmake install step into a temporary folder and pack an installable .deb package from it.
To quickly create those debian files, run dh_make --createorig and press s for a single binary package.
Then you'll need to carefully edit the debian files as described in Chapter 4. Required files under the debian directory of the Debian New Maintainers' Guide.
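Put together, the basic local flow looks something like this (a sketch; the project directory name is just an example):

sudo apt install devscripts debhelper dh-make   # standard Debian packaging tools
cd myproject-0.0.1/                             # unpacked upstream source
dh_make --createorig                            # creates debian/ templates and the .orig tarball
# edit debian/control, debian/changelog, debian/copyright, debian/rules
debuild -us -uc                                 # build an unsigned .deb locally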
If you need to set cmake properties or make any other configuration changes, you'll need to adjust override_dh_auto_configure in rules:
#!/usr/bin/make -f
# See debhelper(7) (uncomment to enable)
export DH_VERBOSE = 1

%:
        dh $@

override_dh_auto_configure:
        dh_auto_configure -- \
                -DCMAKE_LIBRARY_PATH=$(DEB_HOST_MULTIARCH) \
                -DIWINFO_SUPPORT=OFF
Here the -DCMAKE_LIBRARY_PATH=$(DEB_HOST_MULTIARCH) and -DIWINFO_SUPPORT=OFF will be directly passed to cmake.
You can then upload your package to Ubuntu PPA:
debuild -S -I
dput ppa:your-launchpad-user/your-ppa ../*_source.changes
After that PPA build bot will compile and publish your package to PPA and you'll see them on https://launchpad.net/~your-launchpad-user/+archive/ubuntu/your-ppa/+packages
Unfortunately there are a lot of other steps; I have only described the process briefly.
dh-cmake is needed for more sophisticated things. CPack won't work for you if you want to publish to a PPA, because the PPA build bot will run debhelper anyway (essentially what debuild wraps), so it needs the debian folder.
Alternatively, you could use CPack with CMake to generate a .deb; it's fairly easy to do. CMake and CPack are still poorly documented, but they work well.
I suggest adding the following to the bottom of CMakeLists.txt:
# generate postinst file in ${CMAKE_BINARY_DIR} from template #
CONFIGURE_FILE("${CMAKE_SOURCE_DIR}/contrib/postinst.in" "postinst" @ONLY IMMEDIATE)
# generate a DEB when cpack is run
SET(CPACK_GENERATOR "DEB")
SET(CPACK_PACKAGE_NAME ${CMAKE_PROJECT_NAME})
SET(CPACK_SET_DESTDIR TRUE)
SET(CPACK_DEBIAN_PACKAGE_MAINTAINER "grizzlysmit@smit.id.au")
SET(CPACK_PACKAGE_VERSION_MAJOR "0")
SET(CPACK_PACKAGE_VERSION_MINOR "0")
SET(CPACK_PACKAGE_VERSION_PATCH "1")
include(GNUInstallDirs)
SET(CPACK_PACKAGE_DESCRIPTION_FILE "${CMAKE_SOURCE_DIR}/docs/CPack.Description.txt")
SET(CPACK_RESOURCE_FILE_README "${CMAKE_SOURCE_DIR}/docs/README.md")
SET(CPACK_RESOURCE_FILE_LICENSE "${CMAKE_SOURCE_DIR}/docs/LICENCE")
SET(CPACK_DEBIAN_PACKAGE_DEPENDS "libreadline8, libreadline-dev")
SET(CPACK_PACKAGE_VENDOR "Grizzly")
# make postinst run after install #
SET(CPACK_DEBIAN_PACKAGE_CONTROL_EXTRA "${CMAKE_BINARY_DIR}/postinst;")
include(CPack)
The postinst is there to run a script after the install; see CMAKE/CPACK: I want the deb to execute a bash script after being installed, but it doesn't work for more on that.
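With that in place, building and installing the package is just (a sketch; the .deb file name depends on your project name and version):

mkdir build && cd build
cmake ..
make
cpack                                          # generates e.g. myproject-0.0.1-Linux.deb
sudo apt install ./myproject-0.0.1-Linux.deb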

Meson build: Add dependency path for executable manually

What I'd like to do, is rather easy: Compile a project using the Meson build system + manually including a dependency.
However, there's one dependency, which I do not want to install to /usr/lib, due to System Integrity Protection on Mac. (I know I can turn this off; I don't want to.)
So basically I wanna do:
g++ -L[path_to_lib] [files...] but use meson instead of g++.
However, this seems to be super complicated. After doing some research and unsuccessfully adding
cc = meson.get_compiler('c')
dep = cc.find_library('granite', dirs: [path_to_dep])
to my meson.build file (which doesn't work, as it handles libraries, not dependencies)
I'm left feeling rather dumb.
Please help!
I know I could just add the relevant path to $PATH, but that is more than overkill, and I refuse to believe there isn't another nice, quick way to do it (as there is with the ancient C compiler...).
You should be able to solve your problem without modifying the meson.build file (i.e. leave granite as an ordinary dependency). Meson uses pkg-config to search for dependencies, so if you add the non-standard path containing granite's package config file to PKG_CONFIG_PATH, meson will find it. In this case the granite package config file should be correct, of course, i.e. contain the right library and header paths, which it will if you configure the installation of granite with something like:
# Configure:
$ cmake -DCMAKE_INSTALL_PREFIX=/some/path...
# Build:
$ make
# Install (need sudo?):
$ make install
$ export PKG_CONFIG_PATH=/some/path...:$PKG_CONFIG_PATH
granite_dep = dependency('granite')

my_app = executable('my_app',
  dependencies : [granite_dep],
  ...
)
However, note that in the case of find_library(), according to the reference manual:
The result object can be used just like the return value of dependency.
So this should also work:
granite_dep = cc.find_library('granite', dirs : [path])
executable(..., dependencies : granite_dep)
But I recommend the standard way that utilizes pkg-config, because granite can also have dependencies of its own that you will not be able to pick up automatically this way.
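As a quick sanity check before running meson, you can verify that pkg-config actually sees granite (assuming its .pc file was installed under a lib/pkgconfig subdirectory of your prefix):

export PKG_CONFIG_PATH=/some/path/lib/pkgconfig:$PKG_CONFIG_PATH
pkg-config --modversion granite      # should print the installed version
pkg-config --cflags --libs granite   # the flags meson will pick up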

Using ExternalProject_Add with ITK

I am trying a very simple ExternalProject usage against ITK. This will allow my automated Jenkins slave to retrieve ITK directly instead of using a system-installed library (thus I leave it as an option whether to use ExternalProject or not).
So I wrote the following piece of code:
set(ITK_PREFIX "${CMAKE_CURRENT_BINARY_DIR}/ITK")
set(ITK_INSTALL_PREFIX "${ITK_PREFIX}/install-$<CONFIG>")
ExternalProject_Add(ITK
    URL http://sourceforge.net/projects/itk/files/itk/4.6/InsightToolkit-4.6.1.tar.xz
    URL_MD5 d8dcab9193b55d4505afa94ab46de699
    PREFIX ${ITK_PREFIX}
    CMAKE_ARGS -DBUILD_SHARED_LIBS:BOOL=OFF -DBUILD_EXAMPLES:BOOL=OFF -DBUILD_TESTING:BOOL=OFF -DModule_ITKReview:BOOL=ON -DITK_USE_SYSTEM_GDCM:BOOL=ON -DCMAKE_INSTALL_PREFIX=${ITK_INSTALL_PREFIX} -DGDCM_DIR:PATH=${GDCM_INSTALL_PREFIX}
    BUILD_COMMAND "${CMAKE_COMMAND}" --build . --target install --config $<CONFIG>
)
# include directory:
include_directories(${ITK_INSTALL_PREFIX}/include/ITK-4.6)
# link directory:
#link_directories(${ITK_INSTALL_PREFIX}/lib/) # $ sign is escaped
link_directories(${ITK_PREFIX}/install-/lib)
But then I fail to understand how I can possibly populate the ITK_LIBRARIES variable, which I have been using throughout my codebase.
How should I write:
set(ITK_LIBRARIES
    itksys-4.6
    ITKCommon-4.6
    ITKIOImageBase-4.6
    ITKIOMeta-4.6
    ITKIOGDCM-4.6
    pthread
    # ...? possibly others? possibly a different order? ...
)
This feels like a hack that is extremely hard to maintain, especially considering that I need to link against static libraries (a requirement in my case).
Obviously the magical solution would be to run find_package(ITK) and be done with it. But since ExternalProject steps run at build time and not at configure time, I cannot make use of it (ref).
Because people feel this is a duplicate, let me insist: yes, I do understand that I cannot use find_package. My question is totally different, and is rather about the complex case of static linking.
So I should not be building the ordered list of static libraries in ITK_LIBRARIES by hand; that is too complex. Instead I should reuse the logic from a call to find_package(ITK).
This means I need to change the way I build my project and switch to a SuperBuild-type solution.
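To give the idea, a SuperBuild splits the build in two: a first CMake project drives ExternalProject_Add for the dependencies, and the real project is then configured against the installed result, so find_package(ITK) works normally at configure time. Roughly (a sketch; the superbuild/ layout and the ITK_DIR path are assumptions, and the -S/-B flags need a recent CMake):

cmake -S superbuild -B build-deps   # phase 1: builds and installs ITK via ExternalProject
cmake --build build-deps
cmake -S . -B build \
    -DITK_DIR=$PWD/build-deps/ITK/install-Release/lib/cmake/ITK-4.6
cmake --build build                 # phase 2: find_package(ITK) now succeeds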

Makefile: Avoid compiling Fortran modules for new folder

I have a Fortran program that uses modules, i.e. it creates .mod-files during compilation.
I also wrote a Makefile that takes all the .f90 files from src/, puts the created .o files in obj/ and the binary in the current folder, and everything works fine.
I now recompile my program in different folders for different calculations (say calc1/), i.e. I copy the Makefile into calc1/ and type make all there, and all it does is the linking, because the object files already exist. However, if the program includes any modules, the compiler needs the corresponding .mod files to be present in calc1/. Until now I recompiled everything (make clean all), but with the program growing this takes too much time!
A possible solution I came up with is to have one specific folder for the binaries (bin/). But this is not a viable option, because I have jobs in the queue which obviously need a stable binary while I try out new features in the meantime.
So I'm looking for a solution that somehow treats the .mod files similarly to the .o files, e.g. places them in obj/.
I would expect that most compilers provide an option to change the module file path. With gfortran, the option is -J or -M (from the man page):
-Jdir
-Mdir
    This option specifies where to put .mod files for compiled modules. It is
    also added to the list of directories to be searched by a "USE" statement.
    The default is the current directory.

    -M is deprecated to avoid conflicts with existing GCC options.
I think that most compilers also look for .mod files in directories included with -I.
EDIT: As of gfortran 4.6, -M is no longer supported; use -J.
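For example, a minimal sketch with sources in src/ and build artifacts in obj/ (gfortran only; per the man page excerpt above, -J both writes the .mod files and adds the directory to the USE search path):

gfortran -c -Jobj src/mymod.f90 -o obj/mymod.o   # writes obj/mymod.mod
gfortran -c -Jobj src/main.f90 -o obj/main.o     # USE mymod finds obj/mymod.mod via -J
gfortran obj/mymod.o obj/main.o -o myprog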
Based on one of my configure scripts, the flag is -module for ifort and pgf90, although I almost never use those compilers these days, so somebody else should confirm that...