Conan, C++ package manager, doesn't work for Boost

I run conan install Boost/1.64.0@conan/stable, and it fails.
Output:
C:\temp>conan install Boost/1.64.0@conan/stable
Boost/1.64.0@conan/stable: Not found in local cache, looking in remotes...
Boost/1.64.0@conan/stable: Trying with 'bintray'...
Boost/1.64.0@conan/stable: Trying with 'conan.io'...
ERROR: Unable to find 'Boost/1.64.0@conan/stable' in remotes
Trying another package works:
C:\temp>conan install fmt/4.0.0@bincrafters/stable
fmt/4.0.0@bincrafters/stable: Not found in local cache, looking in remotes...
fmt/4.0.0@bincrafters/stable: Trying with 'bintray'...
fmt/4.0.0@bincrafters/stable: Trying with 'conan.io'...
Downloading conanmanifest.txt
[==================================================] 121B/121B
Downloading conanfile.py
[==================================================] 1.8KB/1.8KB
fmt/4.0.0@bincrafters/stable: Installing package
Requirements
fmt/4.0.0@bincrafters/stable from conan.io
Packages
fmt/4.0.0@bincrafters/stable:63da998e3642b50bee33f4449826b2d623661505
fmt/4.0.0@bincrafters/stable: Retrieving package 63da998e3642b50bee33f4449826b2d623661505
fmt/4.0.0@bincrafters/stable: Looking for package 63da998e3642b50bee33f4449826b2d623661505 in remote 'conan.io'
Downloading conanmanifest.txt
[==================================================] 938B/938B
Downloading conaninfo.txt
[==================================================] 491B/491B
Downloading conan_package.tgz
[==================================================] 159.8KB/159.8KB
fmt/4.0.0@bincrafters/stable: Package installed 63da998e3642b50bee33f4449826b2d623661505
Any idea why the package isn't found?
How to debug it?

Conan is a decentralized package manager (similar in spirit to git), so it can have many remotes. By default it comes configured with two remotes:
conan-transit: a read-only copy of the old conan.io repository, which contains many different Boost packages from different authors. Quality is variable, so some packages might work only on certain OSes or might fail for some configurations.
conan-center: a moderated/reviewed repository; package creators can submit inclusion requests to share their packages with the community.
So far conan-transit contains several Boost/1.64 packages; you can check with:
$ conan search Boost* -r=conan-transit
$ conan search Boost* -r=conan-center
As you can see the package you are trying to install doesn't exist in these repositories.
As I said above, conan is decentralized, so you can use different remotes. For example, the "bincrafters" community has a bintray repo that can be added with:
$ conan remote add bincrafters https://api.bintray.com/conan/bincrafters/public-conan
$ conan search Boost* -r=bincrafters
You will see they have a large number of Boost/1.64 packages, because they have created a modularized version of Boost in which every library lives in a different package, so only what you need gets installed.
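For example, a minimal conanfile.txt that pulls in just one of those modular packages might look like the following (the package name and version are illustrative, not confirmed; check the conan search output above for what is actually published on that remote):
[requires]
boost_filesystem/1.64.0@bincrafters/stable

[generators]
cmake
Then, from the directory containing that conanfile.txt:
$ conan install . --build=missing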
UPDATE: Packages in the central repository are being renamed by the community to lowercase. Try lowercase boost in the commands above if necessary.

Related

How to handle package dependencies for some build targets when building with Copr?

I want to have an rpm package built with Copr¹. My current build target list is Fedora 35, 36, rawhide, and CentOS 7 and CentOS Stream 8. I have not yet created the Copr project.
Compiling on one of my machines, the package builds successfully for the Fedora variants with mock. The problem is that on the CentOS variants one of the build dependencies, and some of its dependencies, are not available. I have found appropriate srpm files and compiled them on one of my machines running CentOS Stream 8 (one of them required two custom patches). With those custom dependencies I am able to successfully compile the original package.
So just to be clear, the problem is that the spec file contains for example
BuildRequires: libsomething
where libsomething is available as a plain upstream package in some of the build targets, while it needs an additional custom repo for other build targets.
The FAQ says the following about dependencies:
Can I depend on other packages, which are not in Fedora/EPEL?
Yes, they just need to be available in some yum repository. It can either be another Copr repo or a third-party yum repo (e.g jpackage). Click on “Edit” in your project and add the appropriate repositories into the “Repos” field. Packages from your project are available to be used at build time as well, but only for the project you are currently building and not from your other projects.
But this sounds like an all or nothing approach, and I absolutely do not want to override the already existing upstream packages, only provide them when they are missing.
So what strategy do people use to handle this?
Update: I have now created copr projects and made some attempts at building (after resolving dependencies of dependencies several levels deep), but the problem is as I describe above. If I add copr://hlovdal/projectname as a build dependency, then epel-8-x86_64 compiles fine because it is provided with the missing dependencies, while fedora-35-x86_64 fails because the repository does not have any Fedora packages. If I remove the repo, epel fails while fedora succeeds.
I also attempted to add the base URL from the corresponding file in /etc/yum.repos.d, hardcoding epel instead of $distname, hoping that the Fedora builds would just ignore the non-existing/wrong repo setting, but the build does not like that and still fails.
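For reference, the kind of repo definition being referred to looks roughly like this (the owner/project name is the one mentioned above; the exact section name and URL layout are an assumption about what Copr generates, not copied from the asker's system):
[copr:copr.fedorainfracloud.org:hlovdal:projectname]
name=Copr repo for projectname owned by hlovdal
baseurl=https://download.copr.fedorainfracloud.org/results/hlovdal/projectname/epel-8-$basearch/
enabled=1
gpgcheck=1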
¹ Copr is Fedora's freely available build system.

How do I tell ReadTheDocs to build my project packages from a sub-directory?

I have a repository that contains three Python packages: a main package, and two addon packages, with shared documentation. Each project is in its own directory, with its own setup.py, like so:
Repository
    Main Project
        setup.py
    Addon One
        setup.py
    Addon Two
        setup.py
    Documentation
        RST files, RTD conf, etc.
Previously, I was using setuptools.find_packages() to build my packages, but was having issues with the contents of the packages bleeding together, as they shared namespaces. So I switched to specifying the packages I wanted to build, such as
packages=["Main Package"]
However, this broke my ReadTheDocs auto-build, where I had specified
- method: setuptools
path: Main Project
in .readthedocs.yml, with RTD now complaining my package (inside the Main Project directory) doesn't exist, as it attempts to build it.
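For context, that fragment sits under the python.install key of the v2 configuration file, roughly like this (a sketch, not the asker's exact file):
version: 2

python:
  install:
    - method: setuptools
      path: Main Project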
In my project, I use a script to build the packages: I move into each directory, run its setup, then move out. That works fine; my packages and documentation all build locally. However, it looks like RTD only uses the defined path to locate my setup.py script, and therefore does not find the source package because the working directory is the parent directory (but I could be wrong!).
I've read through the documentation, including https://github.com/readthedocs/readthedocs.org/issues/4354 where the feature was originally added, but I have not been able to find a solution myself yet.
How can I tell RTD to change directory before building the packages, or is there an alternative approach that will support my repo structure?
Repository in question is here.
I found a solution:
I changed my local build script to use the root project directory, as RTD does. I then added the directive package_dir={"": "[directory]"} to the setuptools.setup() call in each project's setup.py.
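A minimal sketch of what that looks like in one of the setup.py files (the package name and "some_directory" are placeholders standing in for the [directory] above, not the asker's actual values):
# setup.py -- sketch only
from setuptools import setup

setup(
    name="main-package",
    version="0.1.0",
    packages=["main_package"],
    # map the package root so the build works when invoked from the repository root
    package_dir={"": "some_directory"},
)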

Installing GConf on MSYS2

I am trying to build a project with MSYS2/MinGW64.
configure fails with:
checking for gconftool-2... no
configure: error: gconftool-2 executable not found in your path - should be installed with GConf
I have found no prebuilt distribution on the home page.
I have tried to guess a package name for pacman:
gconf
gconf2
gconf-devel
mingw-w64-x86_64-gconf
and so on.
No success.
I also have not found a GConf package in the MSYS2 package list.
Should I build GConf from source, or is there another way to get it?
I could not find gconf in the MINGW-packages repository so we do not have a recipe for building it from source. You might consider developing such a recipe. If it works for you, you could submit a pull request and maybe the MSYS2 maintainers will accept it and build it.
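If you do go that route, a very rough skeleton of what a MINGW-packages recipe looks like is below (version, source URL, checksums and dependencies are placeholders to be filled in; this is an untested sketch, not a working recipe for GConf):
# PKGBUILD -- untested skeleton only
_realname=gconf
pkgbase=mingw-w64-${_realname}
pkgname="${MINGW_PACKAGE_PREFIX}-${_realname}"
pkgver=3.2.6
pkgrel=1
pkgdesc="GConf configuration system (mingw-w64)"
arch=('any')
license=('LGPL')
depends=("${MINGW_PACKAGE_PREFIX}-glib2" "${MINGW_PACKAGE_PREFIX}-libxml2" "${MINGW_PACKAGE_PREFIX}-dbus")
makedepends=("${MINGW_PACKAGE_PREFIX}-gcc")
source=("https://download.gnome.org/sources/GConf/3.2/GConf-${pkgver}.tar.xz")
sha256sums=('SKIP')

build() {
  cd "${srcdir}/GConf-${pkgver}"
  ./configure --prefix=${MINGW_PREFIX}
  make
}

package() {
  cd "${srcdir}/GConf-${pkgver}"
  make DESTDIR="${pkgdir}" install
}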

Building a 64bit Debian package on 32bit Ubuntu

I am trying to build a .deb package for an application my company (and me) have been developing.
I'm trying to create a 64bit package on my 32bit ubuntu (12.04 LTS) using dpkg-buildpackage and I get the following warnings/errors:
dpkg-shlibdeps: warning/error: couldn't find library X needed by Y.so (ELF format: 'elf64-x86-64'; RPATH: 'some/path/that/does/not/exist')
When X is one of our compiled shared libraries, we get a warning. When it's a system library (like libgcc_s.so.1 and libstdc++.so.6) we get an error.
Why does the RPATH refer to a path that does not exist?
By the way, when I make a 32-bit package (from our files that were compiled for 32-bit, of course), it only shows warnings (only about our proprietary .so files) but creates the .deb file.
If I could, I would have posted my debian folder content, but I can't take files out of our network. I can type the relevant parts if it's needed.
You need to install the 64-bit version of the library with apt-get (actually any method will do, but this is the easiest):
sudo apt-get install libyouneed-dev:amd64
The trick here is the :amd64, which tells the package manager to install the 64-bit version of that package. The same applies to 32-bit libraries on 64-bit systems. This is called multiarch.
The build is looking at that path because that is where the 64-bit (or 32-bit) libraries get stored, but since you don't have them installed, the path does not exist.
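Depending on the dpkg version, you may first need to enable the foreign architecture before the :amd64 install works (a sketch; libstdc++6 is just an example package, not necessarily the one you are missing):
sudo dpkg --add-architecture amd64
sudo apt-get update
sudo apt-get install libstdc++6:amd64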
Install an amd64 chroot environment and build your package in there. This way you avoid the various multi-arch pitfalls, with the added benefit of having a clean, reproducible build.
There is a tool that makes this very easy: mk-sbuild.
You need to install ubuntu-dev-tools and sbuild.
Then, run mk-sbuild --arch=amd64 precise, which will set up the build environment for you.
Add yourself to the sbuild group: adduser <your user name> sbuild
Log out and log back in so your group membership will be reflected.
You can then build your package in the chroot:
sbuild -d precise --arch=amd64 name_of_package.dsc
This assumes you've already built the source package with debuild -S or similar.
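Putting the steps together, one possible end-to-end sequence looks like this (the package name and version in the .dsc filename are hypothetical):
sudo apt-get install ubuntu-dev-tools sbuild
mk-sbuild --arch=amd64 precise
adduser <your user name> sbuild    # then log out and back in
debuild -S                         # build the source package
sbuild -d precise --arch=amd64 ../mypackage_1.0-1.dsc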

Creating an rpm package from source tarball - but tarball contains no spec file, just an install script

During my time as a SysAdmin, I have encountered applications that provide no rpm packages to install on redhat-based distributions - only a source tarball. The source tarball does not provide a spec file, which would simplify the process of rpm package creation. Instead, the source tarball only provides a bash/ksh script which must be executed as root to install the application on the system.
I have tried to create an rpm package which essentially runs the install script to perform the installation. I have also tried to do the right thing by attempting to rpmbuild the package as a non-root user, and by modifying the install script as best I can so that the script's install dirs refer to rpm environment variables/macros. But with a complicated install script that is a few hundred lines long, which also calls other scripts into play... well, I was bound to fail in this endeavour.
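For the record, the general shape of such a spec - one that just drives the vendor's install script - is sketched below (the name, version and the script's options are hypothetical; real vendor scripts often do not honor a build root, which is exactly where this approach breaks down):
Name:           myapp
Version:        1.0
Release:        1%{?dist}
Summary:        Repackaged vendor application
License:        Proprietary
Source0:        myapp-1.0.tar.gz

%description
Vendor application repackaged from its install script.

%prep
%setup -q

%install
# hypothetical: assumes the vendor script accepts a target directory
./install.sh --prefix=%{buildroot}/opt/myapp

%files
/opt/myapp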
Is there a better way to package up such .spec-less source tarballs? Would a better solution be to:
somehow take a snapshot of the system before the application install
install the source tarball with the provided install script
take a snapshot of the system after the installation, and determine the changes/additions made by the installation
put the list of changes/additions in a spec file, and create an rpm package this way?
Any useful/relevant/instructive/amusing/profound input and advice to this problem will be much appreciated.
Thank you in advance
I think the best solution is to request that upstream provide a spec file for the application. Another way could be to ask an experienced package maintainer to package the application. Or you can explore checkinstall, as it tracks the files installed by a command such as make install. If you want to do the packaging yourself, there are guides that explain the process with many examples.
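For the checkinstall route, the general idea is to let checkinstall run the vendor's install script and record what it installs (a sketch; the package name, version and script name are placeholders):
# run as root inside the unpacked tarball; -R selects RPM output
checkinstall -R --pkgname=myapp --pkgversion=1.0 ./install.sh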
I've seen many scripts like the ones you mention, and I suspect I could tell you with certainty which company is selling you these horrible installs.
Your best bet is to request a proper installable package. Read up on why a package is better than a configure; make; make install, and better than this install.sh nonsense. Armed with that, even if it's only talking points, try to get them into the 3rd millennium. It will be hard, but ultimately most rewarding.
Barring that, you will need that package built (and by now you will have read why). Unfortunately, there are some apps and vendors whose payload can't be packaged, because they compile, query the destination host, look for a license, compile a bit more, etc. My bias here is well-honed.