How can I simplify my stack of package managers? - package-managers

I don't know how it got this bad. I'm a web developer, and I use Ubuntu, and here are just some of the package managers I'm using.
apt-get for system-wide packages
npm for node packages
pip for python packages
pip3 for python 3 packages
cabal for haskell packages
composer for php packages
bower for front-end packages
gem for ruby packages
git for other things
When I start a new project on a new VM, I have to install seemingly a dozen package managers from a dozen different places, and use them all to create a development environment. This is just getting out of control.
I've discovered that I can basically avoid installing and using pip/pip3 just by installing python packages from apt, like sudo apt-get install python3-some-library. This saves me from having to use one more package manager. That's awesome. But then I'm stuck with the Ubuntu versions of those packages, which are often really old.
What I'm wondering is, is there a meta-package manager that can help me to replace a few of these parts, so my dev environment is not so tricky to replicate?

I had a thought to make a package manager to rule them all for that very reason. Never finished it though; too much effort required to stay compatible. For each package manager you have a huge community supporting its upkeep.
Best advice I have is to try to reduce your toolchain for each type of project. Ideally you shouldn't need to work in every language you know for each project you work on. How many projects are you using that use both python 2 and python 3 simultaneously?
Keep using apt for your system packages and install git with it. From there try to stick to one language per project. AFAIK all of the package managers you listed support installing packages from git. The languages you mentioned all have comparable sets of tooling, so use the toolchain available for the target language.
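For example (a rough sketch; the repository URL is a placeholder and the exact syntax differs slightly per tool):
$ pip install git+https://github.com/someuser/somelib.git    # python package straight from git
$ npm install git+https://github.com/someuser/somelib.git    # node package straight from git
Composer and Bundler have equivalents (a "vcs" repository entry in composer.json, a git: option in the Gemfile), so a second registry is rarely strictly necessary.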
I worked with a team that was using composer, npm, bower, bundler, maven, and a tar.gz file for frontend SPAs, because those are the tools they knew. On top of all of that, they were using vagrant simply as a deployer. We considered our toolchain, described our needs, and realized that it could all be expressed in a single language once we adopted the appropriate tooling for the task at hand.

Related

serverless AWS deployment fails because of esbuild

I'm using a Macbook M1 for development, with the aws-nodejs-typescript template. When I try to deploy my serverless application to AWS I'm running into this error:
You installed esbuild for another platform than the one you're currently using.
This won't work because esbuild is written with native code and needs to
install a platform-specific binary executable.
Specifically the "esbuild-darwin-arm64" package is present but this platform
needs the "esbuild-darwin-64" package instead. People often get into this
situation by installing esbuild with npm running inside of Rosetta 2 and then
trying to use it with node running outside of Rosetta 2, or vice versa (Rosetta
2 is Apple's on-the-fly x86_64-to-arm64 translation service).
If you are installing with npm, you can try ensuring that both npm and node are
not running under Rosetta 2 and then reinstalling esbuild. This likely involves
changing how you installed npm and/or node. For example, installing node with
the universal installer here should work: https://nodejs.org/en/download/. Or
you could consider using yarn instead of npm which has built-in support for
installing a package on multiple platforms simultaneously.
If you are installing with yarn, you can try listing both "arm64" and "x64"
in your ".yarnrc.yml" file using the "supportedArchitectures" feature:
https://yarnpkg.com/configuration/yarnrc/#supportedArchitectures
Keep in mind that this means multiple copies of esbuild will be present.
Another alternative is to use the "esbuild-wasm" package instead, which works
the same way on all platforms. But it comes with a heavy performance cost and
can sometimes be 10x slower than the "esbuild" package, so you may also not
want to do that.
Why is this happening?
I needed to reinstall serverless. To do that, I deleted the ./serverless folder in my home directory (~) and reinstalled it using Homebrew: https://formulae.brew.sh/formula/serverless#default
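For what it's worth, the error's hint about Rosetta can be checked before reinstalling anything; a rough sketch (output depends on your setup):
$ node -p process.arch                     # should print arm64 on an M1 when node runs natively
$ arch                                     # should print arm64, not i386, for your current shell
$ rm -rf node_modules package-lock.json    # then reinstall so npm picks the matching esbuild binary
$ npm install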

Difference between apache installation via apt-get install and via configure, make, make install

Recently I tried to install an Apache webserver on an Ubuntu machine. I discovered two ways to do that.
The first one is to install it as described in the Apache docs, by running ./configure, make, make install on the downloaded Apache sources.
And the second way is to install it via apt-get update && apt-get install apache2.
I also noticed that when I install via apt-get, the Apache configuration is different: for example, the directory /etc/apache2/* only exists with this way of installing. So when I install it manually, the directories sites-available, sites-enabled, ... are just missing.
Is there also a way to get these folders when running the manual installation?
Where do these differences come from?
This is nothing that can be summed up in a few sentences. You are basically asking: how does software management work on Linux systems? In short:
Using the apt utility on an Ubuntu-based system means using the system's package management, which should nearly always be what you want. That way the system takes care of keeping your Apache installation up to date, you can remove software again without leaving artefacts behind, and the software is guaranteed to work with the libraries already installed on your system. Potential conflicts are resolved, and you can be certain that the configuration matches your system.
Using the build system is the more generic (archaic) way: you do not only install the software, you build it from scratch from its sources before installing. That is obviously only possible for open-source software. It certainly allows for more flexibility, but you yourself are responsible for a whole lot of things: first setting up a complete build environment, then configuring the package and selecting what you actually want to build, and last but not least keeping software installed this way up to date. This is rarely a good idea, with two exceptions:
you absolutely cannot find a prebuilt package for your Linux distribution, or
you want to make your own modifications to the software itself
The difference in the folder layout of the apache configuration is a separate thing. There are two aspects that come into play here:
you can change that layout using the countless build options offered by the build system (namely the options you can hand over to the configure utility on the command line).
typically the configuration of complex software packages (like the Apache HTTP server with all its modules) is broken up into various subfolders, so that you can keep an overview and so that additional packages can drop their own configuration into place (this is mostly useful for prebuilt packages).
Long story short: in 99.8% of all cases you want to use prebuilt packages prepared for your distribution. That is the power of the software management systems under Linux (which still have no comparable counterpart in other operating systems).
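For illustration, the source-build route described in the question looks roughly like this (the version number and flags are placeholders; the Apache docs list the real configure options):
$ tar -xzf httpd-2.4.x.tar.gz
$ cd httpd-2.4.x
$ ./configure --prefix=/usr/local/apache2 --sysconfdir=/etc/apache2
$ make
$ sudo make install
Note that the sites-available/sites-enabled layout under /etc/apache2 comes from Debian/Ubuntu's packaging of Apache, not from the upstream configure script, which is why it is missing after a manual build.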

Best way to write setup script for multi-language project package that includes anaconda, atom, node.js etc.?

I am designing an environment for productive research, i.e. writing, data-analysis, publication, etc.
In order to share the final results with others, I need to find a way to package this and to set up the local installation.
The project depends on Anaconda, so conda as a package manager is available.
It also includes
Pandoc and some pandoc packages; some will have to be fetched from GitHub directly because some versions are not available via conda-forge (doable in conda)
Atom and Atom packages; they should be installed and configured by my script (this works on the CLI via the apm package manager)
Node.js and Mermaid and a few other JS packages, which require npm calls
Some file-system-level operations, like deleting parts of packages where I only need a portion, creating symlinks and aliases, etc.
Maybe some Python code for modifying yaml/json/ini files or reading therefrom.
The main project will reside in a Github repository. It will be fine for users to clone it from there and start a build script locally.
My idea is to write a Bash shell script that
creates a conda environment based on requirements.yaml for everything that can be done this way
installs other parts using CLI commands (wget/curl etc.)
does all necessary modifications using CLI commands, maybe using a few short Python scripts (e.g. for changing or reading JSON or yaml files).
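A rough sketch of such a script (every name and path below is a placeholder for illustration, not a tested implementation):
#!/usr/bin/env bash
set -euo pipefail

# 1. conda environment from the requirements file
conda env create -f requirements.yaml || conda env update -f requirements.yaml

# 2. Atom packages via apm (placeholder package name)
apm install markdown-preview-plus

# 3. JS tooling via npm (placeholder package name)
npm install -g @mermaid-js/mermaid-cli

# 4. file-system tweaks: symlinks, cleanup, aliases
ln -sf "$HOME/shared/pandoc-templates" "$PWD/templates"

# 5. small Python helpers for yaml/json/ini edits (placeholder script)
python scripts/patch_config.py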
My local usage will be on OSX Big Sur; Linux should be supported, and Windows compatibility would be nice-to-have.
Before I start:
Is this approach viable? I think it will be pretty transparent, but of course also a bit proprietary.
Docker is likely overkill for my purpose, and I also read that the execution will be slow on OSX.
The same environment will likely be installed multiple times on the same user's machine, so it is important that I can control e.g. the reuse of existing packages and files via aliases or symlinks. It is not important that the multiple installations are decoupled for the non-python/non-conda parts (e.g. Atom, node.js, mermaid could be the same binaries for all installations; just the set of Python packages might vary by installation).
Thanks for your expertise!

How can I try a Cygwin package that I just created?

I'm a maintainer of a program that I might like to propose for inclusion in the Cygwin distribution.
We use CMake so there is a packager available, and it's easy to create a .bz2 package.
Once I've created the package, how can I try it locally? In Linux this can easily be done, but is there a way to use the Cygwin package installer so that it picks up a local package?
I've read the package contribution documentation and related pages but can't find an answer.
The CMake Cygwin package generator seems extremely out-of-date. Cygwin hasn't used .bz2 for some years. This is from a Cygwin mailing list answer by Adam Dinwoodie:
Cygwin packages generally use Cygport to define the build process and
so forth. It's more-or-less the equivalent of rpmbuild for RPM
packages, and similar tools for other distribution systems. The
documentation for Cygport is at http://cygwinports.github.io/cygport/;
if you're using make in a reasonably standard way, most things should
Just Work™.
In particular, if you're using Cygport, it'll automatically do things
like creating setup.hint files for you.
For testing locally, I find it's simplest to just do tar -xaC/ -f
<tarball> on the compiled tarballs that Cygport generates. That
doesn't test the dependency management or anything that requires
post-install scripts, but it's fine for checking the installation
itself works.
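A rough outline of that workflow, assuming you have already written a foo.cygport file (the name is a placeholder):
$ cygport foo.cygport prep compile install package      # build the package tarballs
$ tar -xaC/ -f <path to the tarball cygport generated>  # unpack over / to test the installed files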

Why do techs recommend YUM installs yet repositories and providers are ages behind?

I have been reading page after page after page about the benefits of using the YUM package installer and how NOBODY should build installs from source files (which again makes no sense to me), yet the repositories and source builders always package files in tarball format, leaving a TON of work (which usually ends up going wrong) to the individual instead of providing SRPMs for the end user.
Has the world gone mad? I feel like I am taking crazy pills!
Well, first of all there's more to life than just RPM and YUM. An SRPM would be (somewhat) useless to Debian, for instance.
As for why you'd use a package repository over building everything yourself: well, I don't know about you, but I'd much rather just run (I'm using Ubuntu so I have apt-get instead of yum):
# apt-get install firefox
than trying to figure out all the dependencies, as well as all the dependencies' dependencies, make sure I have the correct versions of everything, download/build/install any that I don't have (or that are out of date: if updating existing dependencies, make sure the newer versions don't break any existing software I have, and make sure I don't end up with 15 different versions of the same thing), and only after all that download/configure/build/install Firefox.
Then realise I'll also want Open Office or MySQL and start all over again!
That said, there are some packages that I install the latest version of from source. For example, I run my media centre off MythTV and I always like to build the latest version of that from Subversion. But even then, with a package manager, that's as easy as:
# apt-get build-dep mythtv
$ cd ~/src/mythtv/
$ svn co <svn repo of mythtv>
$ ./configure && make && (etc)
That is, the package management software already knows all the dependencies for MythTV and it can download and install them automatically. Why spend hours tracking it all down manually?
In the end, it sounds to me like maybe you'd prefer a distro like Gentoo... that's the benefit of Linux, of course. If you don't like how things are run in the Fedora/RedHat distribution, you can just choose a different one.
There are a few reasons to use a packaging infrastructure (like yum):
Creating "installations" is much easier to do, due to automatic dependency installation. From the simple yum install blah to creating chroots with mock/--installroot, or live CDs, etc.
Managing those installations. From the obvious yum update to operations which are much harder to do otherwise like: yum --security update, yum --bz=1234 update-minimal, yum --disablerepo=testing distro-sync.
Auditing those installations. The obvious examples here being yum history (not available in plain RHEL-5 atm.) and yum verify.
...however, speed is not a factor: for instance, Fedora rawhide moves as fast as Gentoo.
RHEL-5 does not move that quickly because it's 3 years old and is not supposed to break ... not because it's managed using yum/rpm. There are third-party providers, like iuscommunity, which release co-installable newer releases of various packages. Or, if you need to, you can create your own.
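If you do build your own newer packages, publishing them so that yum can manage them is not much extra work; a rough sketch (the path and repo id are placeholders):
# index a directory full of your RPMs
$ createrepo /srv/myrepo
# then point yum at it with a file like /etc/yum.repos.d/myrepo.repo:
#   [myrepo]
#   name=My local packages
#   baseurl=file:///srv/myrepo
#   enabled=1
#   gpgcheck=0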
Or you can run a production server on Fedora rawhide or gentoo, both will have the latest packages really quickly ... I would not recommend that option though.
Among other things, tarballs are system independent and YUM appears to be RPM-based and thus mostly usable by Linux only (plus Netware and AIX, so as I said, Linux only :) )