Nested buildout packages with mr.developer - Recursive buildout - buildout

I am using mr.developer to checkout my packages from a mercurial repository, but I must be doing something wrong since I have a problem with nested dependencies.
For example, if I have foo with the following
[buildout]
develop = .
extensions = mr.developer
sources = sources
auto-checkout =
    pack1
parts = foo
[sources]
pack1 = hg http://blah.com/hg/pack1
foo has a dependency on pack1, listed in setup.py as install_requires = ['pack1'],
When I run bin/buildout, everything goes smoothly, mr.developer downloads pack1, and foo gets created without issues since pack1 has been downloaded, and therefore exists.
Now, I have another package, bar, which lists foo as a dependency.
[buildout]
develop = .
extensions = mr.developer
sources = sources
auto-checkout =
    foo
parts = bar
[sources]
foo = hg http://blah.com/hg/foo
I also list foo as a dependency in my setup.py by doing install_requires = ['foo'],
What happens now is the part that I do not understand.
When I run bin/buildout, mr.developer goes and fetches foo, but it doesn't seem to execute the buildout.cfg located inside foo/.
As a result, foo/setup.py requires pack1, which doesn't exist.
How do I make sure that mr.developer actually goes and fetches pack1 from http://blah.com/hg/pack1, as indicated in foo/buildout.cfg?
I would like to be able to nest multiple packages like this, without having to descend into each package and run buildout manually.
Cheers,
Martin

You are misunderstanding how buildout works.
Normally, buildout will try and find all eggs needed to build your parts for you. It does so by searching for the eggs (optionally pinned to specific versions) in your site-packages, on PyPI or in any additional web locations (using find-links).
It will do so recursively until all dependencies are met. So if you specify you want to use an egg called foo that depends on bar, which in turn depends on spam and bacon, buildout will locate those four eggs for you.
Note that eggs are special python packages, using the .egg extension. If there is instead a python package with a setup.py file that specifies the correct name, then that setup.py is executed to create an egg on the fly.
This is where development eggs come in; they are python packages that do not need to be downloaded from elsewhere, because they are already present on the filesystem. Their version requirements are not enforced, and if present they take precedence over other versions of the egg found elsewhere. When buildout runs, their setup.py is run to build an egg in place; you'll find a .egg-info directory in that package after buildout has run, and some more metadata is stored in the develop-eggs directory of your buildout.
In your examples you use mr.developer to manage your development eggs, loading them from a mercurial repository first. Buildout itself doesn't really care about this; it's just a (clever) means of loading python packages from an SCM repository and treating them as python eggs.
All you need to do is list all dependencies coming from mercurial in [sources] and in auto-checkout (one per line). In your case the dependencies run bar -> foo -> pack1, and by listing foo and pack1 both in mr.developer-controlled configurations you ensure that buildout will find development eggs for both of these.
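For the bar buildout, that could look roughly like this (the repository URLs are taken from the question's examples; adjust them to your actual layout):
[buildout]
develop = .
extensions = mr.developer
sources = sources
auto-checkout =
    foo
    pack1
parts = bar
[sources]
foo = hg http://blah.com/hg/foo
pack1 = hg http://blah.com/hg/pack1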
In all this it is important to remember that one buildout configuration is all that is needed; buildout does not run buildout configuration files found inside packages. It only deals with python eggs, not other buildout configurations. You do sometimes find buildout configuration files inside python eggs, but these are there for the developer of the egg, to run tests and aid development, not to pull in dependencies when used as an egg in your own projects.

Related

How to handle package dependencies for some build targets when building with Copr?

I want to have an RPM package built with Copr [1]. My current build-target list is Fedora 35, 36, and Rawhide, plus CentOS 7 and CentOS Stream 8. I have not yet created the Copr project.
Compiling on one of my machines, the package builds successfully on the Fedora variants with mock. The problem is that on the CentOS variants one of the build dependencies, and some of its dependencies, are not available. I have found appropriate SRPM files and compiled them on one of my machines running CentOS Stream 8 (one of them required two custom patches). With those custom dependencies I am able to successfully compile the original package.
So just to be clear, the problem is that the spec file contains for example
BuildRequires: libsomething
where libsomething is available as a plain upstream package in some of the build targets, while it needs an additional custom repo for other build targets.
The FAQ says the following about dependencies:
Can I depend on other packages, which are not in Fedora/EPEL?
Yes, they just need to be available in some yum repository. It can either be another Copr repo or a third-party yum repo (e.g jpackage). Click on “Edit” in your project and add the appropriate repositories into the “Repos” field. Packages from your project are available to be used at build time as well, but only for the project you are currently building and not from your other projects.
But this sounds like an all or nothing approach, and I absolutely do not want to override the already existing upstream packages, only provide them when they are missing.
So what strategy do people use to handle this?
Update: I have now created Copr projects and made some attempts at building (after resolving dependencies of dependencies at several levels), but the problem is as I describe above. If I add copr://hlovdal/projectname as a build dependency, then epel-8-x86_64 compiles fine because it is provided with the missing dependencies, while fedora-35-x86_64 fails because that repository does not have any Fedora packages. If I remove the repo, the EPEL builds fail while the Fedora builds succeed.
I also attempted to add the base URL from the corresponding .repo file under /etc/yum.repos.d/, hardcoding epel instead of $distname and hoping that the Fedora builds would just ignore a non-existing/wrong repo setting, but the build does not like that and still fails.
[1] Copr is Fedora's freely available build system.

How do I tell ReadTheDocs to build my project packages from a sub-directory?

I have a repository that contains three python packages: a main package, and two addon packages, with shared documentation. Each project is in its own directory, with its own setup.py, as so:
Repository
    Main Project
        setup.py
    Addon One
        setup.py
    Addon Two
        setup.py
    Documentation
        RST files, RTD conf, etc.
Previously, I was using setuptools.find_packages() to build my packages, but was having issues with the contents of the packages bleeding together, as they shared namespaces. So I switched to specifying the packages I wanted to build, such as
packages=["Main Package"]
However, this broke my ReadTheDocs auto-build, where I had specified
- method: setuptools
  path: Main Project
in .readthedocs.yml, with RTD now complaining my package (inside the Main Project directory) doesn't exist, as it attempts to build it.
In my project, I use a script to build the packages: I move into each directory, run its setup, then move out. That works fine; my packages and documentation all build locally. However, it looks like RTD only uses the defined path as a prefix for the setup.py script, and therefore doesn't find the source package, because the working directory is still the parent directory (but I could be wrong!).
I've read through the documentation, including https://github.com/readthedocs/readthedocs.org/issues/4354 where the feature was originally added, but I have not been able to find a solution myself yet.
How can I tell RTD to change directory before building the packages, or is there an alternative approach that will support my repo structure?
Repository in question is here.
I found a solution:
I changed my local build script to use the root project directory, as RTD does. I then added the directive package_dir={"": "[directory]"} to the setuptools.setup() calls in each project's setup.py.
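For illustration, a minimal sketch of the package_dir directive (the names below are placeholders, not the repository's actual package or directory names):
from setuptools import setup

setup(
    name="main-package",            # placeholder project name
    version="1.0",
    # Map the package root to a subdirectory so the sources are found even
    # when setup.py is run from the repository root.
    package_dir={"": "src"},        # "src" stands in for the [directory] above
    packages=["main_package"],      # i.e. the code lives in src/main_package/
)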

Find secondary dependencies in CMake config files

Following modern CMake guidelines (e.g. see https://www.slideshare.net/DanielPfeifer1/effective-cmake, particularly slide 46), I am trying to write out a PkgConfig.cmake file for my Pkg.
Pkg depends on Foo which in turn depends on Bar. Neither Foo nor Bar have config files - rather I am using FindFoo.cmake and FindBar.cmake to find them.
My PkgConfig.cmake file looks like this
set(Pkg_LIBRARIES Pkg::Pkg)
include(CMakeFindDependencyMacro)
find_dependency(Foo) # Use FindFoo.cmake to find and import as target Foo::Foo
# Foo depends on Bar which is similarly imported using
# FindBar.cmake as target Bar::Bar
include("${CMAKE_CURRENT_LIST_DIR}/PkgTargets.cmake")
My resultant PkgTargets.cmake looks like
add_library(Pkg::Pkg STATIC IMPORTED)
set_target_properties(Pkg::Pkg PROPERTIES
INTERFACE_LINK_LIBRARIES "Foo::Foo")
# Load information for each installed configuration
.
.
.
My question is how can I avoid other packages importing Pkg into their project from having to specify where Foo and more importantly where Bar is to be found?
Doesn't it defeat the purpose of building transitive dependencies if the locations of Foo and Bar packages have to be specified again either through variables Foo_ROOT and Bar_ROOT or CMAKE_PREFIX_PATH?
My Pkg already knows where it was found, so should I parse/set Foo_ROOT and Bar_ROOT and put it into my PkgConfig.cmake file?
My question is how can I avoid other packages importing Pkg into their project from having to specify where Foo and more importantly where Bar is to be found?
It is perfectly allowed for PkgConfig.cmake to specify (hint) locations of its dependencies.
My Pkg already knows where it was found, so should I parse/set Foo_ROOT and Bar_ROOT and put it into my PkgConfig.cmake file?
Note that XXXConfig.cmake files are generally prepared so that the installed project can be moved to another directory on the build machine or, more importantly, copied to another machine and used there.
Because the locations of Foo and Bar on that other machine may differ from those on the build machine, knowing their build-machine locations does not help to find them on the target machine.
Nevertheless, it is up to you (as the project's developer) to specify the project's usage constraints. You may, for example, specify that the project can only be used on the machine where it was built. In that case, reusing the build locations of Foo and Bar in the PkgConfig.cmake script is justified.
Moreover, even if you allow the installed project to be copied to another machine, you can still use the build locations of Foo and Bar as hints when searching for them in PkgConfig.cmake. Then, if the project is used on the same machine where it was built, the dependencies will be found without any user intervention. The same is true if the project is copied to another machine that has Foo and Bar in the same locations as the build machine.
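A minimal sketch of how such hints could be recorded in PkgConfig.cmake (the paths are placeholders for wherever Foo and Bar were actually found when Pkg was built; the <Name>_ROOT hints require CMake 3.12+ with policy CMP0074):
include(CMakeFindDependencyMacro)
# Hint-only defaults: a user's own -DFoo_ROOT=..., -DBar_ROOT=... or
# CMAKE_PREFIX_PATH settings still take precedence.
if(NOT DEFINED Foo_ROOT)
    set(Foo_ROOT "/placeholder/prefix/for/foo")
endif()
if(NOT DEFINED Bar_ROOT)
    set(Bar_ROOT "/placeholder/prefix/for/bar")
endif()
find_dependency(Foo)  # FindFoo.cmake/FindBar.cmake pick up the *_ROOT hints
include("${CMAKE_CURRENT_LIST_DIR}/PkgTargets.cmake")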

Built two different Debian packages for different Build Types using CMake

I have a small CMake project with different build types, debug and release. I am also providing a Debian package for this project. Building the Debian package for release and providing it in my own Debian repository works perfectly.
Now I also want to provide another Debian package for debug, for debugging purposes, with a different package name. For example, my project is called myproject, and the debugging package should be myproject-debug.
I have already read documentation about how to solve this in the debian/control file. I want to use Replaces: ... on each package (and vice versa), so that only one of the two packages can be installed at a time: either myproject or myproject-debug, but not both. Both packages use exactly the same files and filenames; only the binary in the myproject-debug package has more debugging information and debug prints. Everything else should be the same: same filenames, same paths, etc.
Now the problem is that I don't know what the debian/rules file should look like in order to first build the myproject package in one folder and then build myproject-debug with different CMake options (-DCMAKE_BUILD_TYPE=debug) in a different folder, so that the filenames can stay the same.
There is a CMake tutorial in the Debian documentation, but it doesn't fit my requirements: in that tutorial everything is built in a single folder, that folder contains different files, and different .install files are then used to copy the needed files into each package. Since I have the same binary filename for both packages, myproject and myproject-debug, that tutorial does not really fit my needs.
I already have the following lines in my debian/rules file:
override_dh_auto_configure:
	dh_auto_configure -- -DCMAKE_BUILD_TYPE=release
But how can I run two different builds with two different build types?
For example, something like this, to split it up:
override_dh_auto_configure_release:
	dh_auto_configure -- -DCMAKE_BUILD_TYPE=release
override_dh_auto_configure_debug:
	dh_auto_configure -- -DCMAKE_BUILD_TYPE=debug
And run both in different folders so I can add both folders to two different packages.
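Perhaps debhelper's out-of-source build directories could be used for that; something along these lines is what I imagine, but I have not verified it (recipe lines in debian/rules must be indented with tabs, and the package names are just my project's):
override_dh_auto_configure:
	dh_auto_configure --builddirectory=build-release -- -DCMAKE_BUILD_TYPE=release
	dh_auto_configure --builddirectory=build-debug -- -DCMAKE_BUILD_TYPE=debug
override_dh_auto_build:
	dh_auto_build --builddirectory=build-release
	dh_auto_build --builddirectory=build-debug
override_dh_auto_install:
	dh_auto_install --builddirectory=build-release --destdir=debian/myproject
	dh_auto_install --builddirectory=build-debug --destdir=debian/myproject-debug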
Or maybe there is even a better solution I cannot imagine yet?

A layout for maven project with a patched dependency

Suppose I have an open-source project that depends on some library which must be patched in order to fix some issues. How do I do that? My ideas are:
1. Have the library's sources set up as a module and keep them in my VCS. Pros: simple. Cons: third-party sources in my repo, might slow down the build process, hard to find the patched places (though this can be noted in a README).
2. Have a module like in 1, but keep only the patched source files, compile them with the original library jar on the classpath, and somehow replace the *.class files in the library jar at build time. Pros: builds faster, easy to find the patched places. Cons: hard to configure, and that jar hackery is non-obvious (the library jar in the repository and in my project assembly would differ).
3. Keep the patched *.class files in main/resources and replace them on packaging, as in 2. Pros: almost none. Cons: binaries in VCS, hard to recompile a patched class since patch compilation is not automated.
One nice solution is to create a distinct project with the patched library sources and deploy it to a local/enterprise repository with a -patched qualifier. But that would not work for an open-source project that is meant to be easily buildable by anyone who checks out its sources. Or should I just say "and also, before you build my project, please check out that stuff and run mvn install"?
One nice solution is to create a distinct project with the patched library sources and deploy it to a local/enterprise repository with a -patched qualifier. But that would not work for an open-source project that is meant to be easily buildable by anyone who checks out its sources. Or should I just say "and also, before you build my project, please check out that stuff and run mvn install"?
This is what I would do (and actually what I do) for both corporate and open-source projects. Get the sources, put them under version control in a distinct project, patch them, rebuild the patched library (and include this information in the version, something like X.Y.Z-patched), deploy it to a repository (you could use SVN for this, à la Google Code [1]), declare the repository in your POM, and update the dependency to point to your patched version.
With this approach, you can say to your users: check out my code and run mvn install, and they will just get the patched version without any extra action. This is IMHO the cleanest way (not error-prone, no classpath-order mess, no increase in build time, etc.).
[1] Lots of people deploy their code to their hosted Subversion repository (how-to in this post).
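As an illustration only, the consuming project's pom.xml might then contain something along these lines (the repository URL and the library coordinates are made-up placeholders):
<!-- Repository hosting the rebuilt, patched artifact (placeholder URL) -->
<repositories>
  <repository>
    <id>patched-libs</id>
    <url>https://example.org/maven-repo/releases</url>
  </repository>
</repositories>
<dependencies>
  <!-- Depend on the patched rebuild instead of the upstream release -->
  <dependency>
    <groupId>com.example</groupId>
    <artifactId>somelibrary</artifactId>
    <version>1.2.3-patched</version>
  </dependency>
</dependencies>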
One nice solution is to create a distinct project with patched library sources, and deploy it on local/enterprise repository with -patched qualifier. But that would not fit for an opensourced project that is meant to be easily buildable by anyone who checks out its sources. Or should I just say "and also, before you build my project, please check out that stuff and run mvn install".
I'd agree with this and Pascal's answer. Some additional notes:
- You may use dependency:unpack on the original artifact and then combine that with your compiled classes if you don't want to rebuild the whole dependent project.
- In either case, your pom.xml will need to correctly represent the dependencies of that library.
- You can still integrate this as part of your project's build to avoid the 'deploy to a repository' step.
- Make sure you honour the constraints of the project's license when doing all this!