Zope buildout for development environment - zope

Is it possible to have a Zope 2 buildout unpack Python files into their normal directory structure, the way standard Python modules are laid out, rather than under separate .egg directories? It makes looking for files easier when debugging.

A 'regular' flat layout doesn't handle namespaced packages very well, where multiple eggs share a top-level name (such as the plone.app.* and zope.* packages). Use the collective.recipe.omelette buildout recipe instead: it uses symlinks to give you a single searchable directory structure spanning all the eggs used in your buildout.
[buildout]
parts =
    omelette
    ...

[omelette]
recipe = collective.recipe.omelette
eggs = ${instance:eggs}
You'll find the result in parts/omelette. Note that this structure uses symlinks, so if you use tools like find or ack, make sure to configure them to follow symlinks (e.g. find -L parts/omelette and ack --follow).
The omelette directory structure is not used by Python itself, it is purely intended for presenting a coherent library structure from all eggs used in your buildout.
Note that on Windows, you also need to install the junction utility for the recipe to work.

Related

How do I install local modules?

For creating and maintaining Perl 5 modules, I use Dist::Zilla. One of my favorite features is being able to install local modules.
However, with Perl 6, I'm not sure how to install local modules. Sure, I can use use lib:
use lib 'relative/path';
use My::Awesome::Module;
But, I'd really like to be able to install My::Awesome::Module, so that all I had to do was use it:
use My::Awesome::Module;
One way to accomplish this, would be setting PERL6LIB, but that still isn't "installing" a module like zef install ./My-Awesome-Module.
Update: Looks like I need to craft an appropriate META6.json file.
To understand how to set up a module so it is understood by the toolchain utilities, see Preparing the module. Typically this means adding a META6.json file that describes the distribution, including quasi-manifest elements such as which files you actually mean to include/provide. Once the META6.json is created, the module is ready to be installed:
zef install ./My-Awesome-Module
which (assuming no uninstalled dependencies) is essentially:
my $install-to-repo = CompUnit::RepositoryRegistry.repository-for-name("site");
my $preinstall-dist = Distribution::Path.new("./My-Awesome-Module");
$install-to-repo.install($preinstall-dist);
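For reference, a minimal META6.json sketch for a distribution laid out like the one above; the version, auth value, and file extension are placeholders, and the authoritative field list is in the "Preparing the module" documentation linked above:

```json
{
    "name": "My::Awesome::Module",
    "description": "An awesome module",
    "version": "0.0.1",
    "perl": "6.d",
    "auth": "github:your-username",
    "provides": {
        "My::Awesome::Module": "lib/My/Awesome/Module.pm6"
    },
    "depends": []
}
```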
Starting with Rakudo 2019.01 you can, assuming no missing dependencies, install a local distribution without a META6.json -- but this is purely a development nicety and won't work for complex setups whose namespacing and file structure cannot be inferred.
my $read-from-repo = CompUnit::Repository::FileSystem.new(prefix => "./My-Awesome-Module/lib");
my $install-to-repo = CompUnit::RepositoryRegistry.repository-for-name("site");
my $some-module-name = "My::Awesome::Module"; # needed to get at the Distribution object in the next step
my $preinstall-dist = $read-from-repo.candidates($some-module-name).head;
$install-to-repo.install($preinstall-dist);
I'm writing a tool that may help you: http://github.com/FCO/6pm

How do I avoid absolute pathnames in my code when using Git?

Up to this point in my programming career, I have mostly worked on small projects, editing PHP/Perl/Python files directly on the Linux dev host, then copying the files to the same directory on a live server to push them into production.
I'm now trying to improve that workflow by learning how to use Git, and how to properly write code with Git in mind. I want to learn the best practices for this scenario:
Here's a project called "BigFun", which also uses some libraries with tasks that are common to many projects.
/home/projects/BigFun/bin/script1.pl
/home/projects/BigFun/bin/script2.pl
/home/projects/BigFun/bin/script3.pl
/home/projects/BigFun/lib/FunMaker.pm
/home/projects/common/lib/Utilities.pm
/home/projects/common/lib/Format.pm
If this program is not managed by Git, and if I know that the files will never be moved around, I could do something like this:
#!/usr/bin/perl
use warnings;
use strict;
use lib '/home/projects/BigFun/lib';
use FunMaker;
use lib '/home/projects/common/lib';
use Utilities;
But once this is managed by Git, anyone will be able to check these files out into their own development space. The hardcoded paths will no longer work if someone's development root is "C:\My Projects\BigFun".
So, some questions:
I can probably assume that the BigFun "lib" directory will always be relative to the "bin" directory. So maybe I could change the use lib line to use lib '../lib';. Is that the right solution, though?
It seems likely, though, that this example code would be split up into two repositories: one for BigFun, and a "common" repo containing tools used by many projects. When that happens, the BigFun code would have no way of knowing where to find the "common" libraries. /home/projects/common/lib is not at all guaranteed to work, and neither is ../../common/lib. What's the normal Git way to handle something like this?
I'm working my way through the "Pro Git" book, but I haven't (yet) found anything to answer these questions. Thanks to anyone who can point me in the right direction!
Your question is not really about Git; it's about collaboration.
Absolute paths force all users of your software to use the same directory layout, and that's unacceptable. No decent software does that. Avoiding absolute paths is the norm, regardless of which version control system you use, if any.
How do you make your software work using strictly relative paths and never absolute paths? That depends on the software/framework/language. For relative paths to make sense, you need to answer the question: relative to what? Here are some common anchors from which relative paths can be resolved:
current working directory
user home directory
package installation directory
framework installation directory
Every language typically has some packaging mechanism. The goal of packaging is to let developers create a standard package whose content is organized so that the language's standard tools can install it: adding the software to the system-wide libraries of the language, to custom user libraries, or to a specified library location. Once the software is installed from a standard package, it is ready to use just like any other installed software.
In your example, use warnings; and use strict; work without any setup because these modules ship with Perl itself; roughly speaking, Perl finds them relative to its own installation directory.
So what you need to do is:
Figure out how to package a Perl library
Figure out how to install a Perl package
Once your FunMaker and Utilities are installed as standard Perl packages, you will be able to simplify your script as:
#!/usr/bin/perl
use warnings;
use strict;
use FunMaker;
use Utilities;
You will of course have to document the dependencies of the script (FunMaker, Utilities),
as well as how to install them (especially the location where these packages will be hosted).
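Separately, for paths within a single repository (bin/ finding its sibling lib/), note that use lib '../lib' resolves relative to the current working directory, not the script. The usual Perl idiom is to anchor the path on the script's own location with the core FindBin module; a minimal sketch (the FunMaker line is commented out since that module only exists in the question's project):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# FindBin determines the directory containing the running script,
# so the path below works from any checkout location and any cwd.
use FindBin;
use lib "$FindBin::Bin/../lib";

# use FunMaker;   # found via BigFun/lib, wherever BigFun was cloned
```

This solves the intra-repository case; the cross-repository "common" libraries are still best handled by packaging and installing them, as described above.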

Can a git repository have N working trees

I am trying to write a file store based on libgit2.
Software snapshots should be saved as branches (e.g. mysoftware), with specific versions committed and tagged. Later I want to check out those tags into different directories.
When looking at git_checkout_tree, it seems like there is only one working tree for a repository and thus it does not even seem possible to checkout multiple working trees concurrently.
Is this correct?
EDIT:
Additionally, I would like for this thing to work on Windows without the need for cygwin!
The git_checkout_opts structure in libgit2 contains a target_directory option that allows git_checkout_tree() to write to a different directory instead of the repository's default working tree. This lets you build a custom solution with libgit2 that maintains multiple checked-out copies.
Without using that option, a libgit2 git_repository object expects there will be just one working directory and looks to the core.worktree config option if it isn't the "natural" working directory for the repository.
The git-new-workdir trick with symlinks in the .git directory doesn't work well with libgit2 right now, I'm afraid, and particularly not on Windows. I'd love to see this addressed, but it isn't high on my priority list.
Git didn't support this natively at the time, and the contrib script git-new-workdir was the usual workaround. Since Git 2.5 the built-in git worktree command (e.g. git worktree add ../my-checkout my-tag) supports multiple working trees natively.

How to create RPM subpackages using the same paths for different envs?

I would like to use RPM to build subpackages for different environments (live, testing, developer) from the same files: a package called name-config-live, one called name-config-testing, and one called name-config-developer, each containing the same paths but with the configs corresponding to the environment it's named after.
As an example, let's say on all environments I have a file called /etc/name.conf; on testing I want it to contain "1", on development "2", and on live "3". Is it possible to do this in the same spec, given that subpackage generation only happens at the end, not in the order I enter it? (And hopefully not with %post -n.)
I tried using BuildRoot, but it seems that's a global attribute.
I don't think there's a native way; I would do a %post like you had noted.
However, I would do this (similar to something I do with an internal-only package I develop for work):
Three separate files /etc/name.conf-developer, /etc/name.conf-live, etc.
Have all three packages provide a virtual package, e.g. name-config
Have main package require name-config
This will make rpm, yum, or whatever require that at least one of them be installed in the same transaction
Have all three packages conflict with each other
Have each config package's %post (and possibly %verify) symlink /etc/name.conf to the proper config
This also helps show the user what is happening
Cons:
It's a little hackish
rpm -q --whatprovides /etc/name.conf will say the file is not owned by any package
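A sketch of one subpackage following the scheme described above; package names and paths come from the question, while the Summary text and %description boilerplate are placeholders:

```spec
%package config-live
Summary: name configuration for the live environment
Provides: name-config
Conflicts: name-config-testing, name-config-developer

%description config-live
Live-environment configuration for name.

%files config-live
/etc/name.conf-live

%post config-live
# point the shared path at this environment's config
ln -sf /etc/name.conf-live /etc/name.conf
```

The testing and developer subpackages would be identical apart from the suffix and the Conflicts list.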

Specify a custom PYTHON_EGG_CACHE dir with zc.buildout?

We're having problems when trying to deploy a number of projects which use zc.buildout - specifically we're finding that they want to put their PYTHON_EGG_CACHE directories all over the show. We'd like to somehow set this directory to one at the same level as the built-out project, where eggs can be found.
There is some mention online that this can be done for Plone projects, but is it possible to do this without Plone?
Are there recipes that can set up an environment variable, so that PYTHON_EGG_CACHE is set for the executable files in ./bin?
The PYTHON_EGG_CACHE is only used for zipped eggs, your best bet is to have zc.buildout install all required eggs unzipped:
[buildout]
...
unzip = true
If your system Python has zipped eggs installed that still require unzipping for resource access, and setting PYTHON_EGG_CACHE in your scripts is your only option (as opposed to setting the environment variable for your user), you could use the initialization option of zc.recipe.egg to add arbitrary Python code to your scripts:
[a-part]
recipe = zc.recipe.egg
...
initialization =
    import os
    os.environ['PYTHON_EGG_CACHE'] = '/tmp/python_eggs'
I'm not sure what you mean. Three options that you normally have:
Buildout, by default, stores the eggs in a directory called eggs/ inside your buildout directory.
You can set the eggs-directory variable in your buildout.cfg's [buildout] section to some directory, to tell buildout where to place the eggs.
You can also set that same option in .buildout/default.cfg inside your home directory. That way you set a default for all your projects, which is handy for storing all your eggs in one place: it can save a lot of download time, for instance.
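For instance, a ~/.buildout/default.cfg along these lines (the path is a placeholder):

```
[buildout]
eggs-directory = /home/me/.buildout/eggs
```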
Does one of those (especially the last one) accomplish what you want?
And: don't muck around with eggs in the generated bin/* files. Let buildout pick the eggs; that's its purpose.