Why do we have to create a virtual environment to get the package?
You should always use some form of virtual environment for each project. Different libraries depend on other libraries within specific version ranges, so once you combine several packages you need something to keep track of all those versions.
This might not feel like a problem when you're first starting out, but over time you will have older projects and projects with very different dependency sets. Then you won't want your base environment all messed up with version conflicts.
Two good options:
Use pyenv to switch between Python versions and Poetry to manage virtual environments & dependencies (my personal favourite; see the sketch after this list)
Use Anaconda, which does all three (Python versions, dependency conflicts, and virtual environments)
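For example, a minimal sketch of the pyenv + Poetry workflow (assuming both tools are already installed; the Python version, project name, package and script name are all placeholders):

    # pin a Python version for this project
    pyenv install 3.10.4
    pyenv local 3.10.4        # writes .python-version in the project directory

    # create the project and its isolated virtual environment
    poetry new my-project && cd my-project
    poetry env use "$(pyenv which python)"

    # dependencies are recorded in pyproject.toml and locked in poetry.lock
    poetry add requests
    poetry run python my_script.py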
Related
Recently I tried to install an Apache web server on an Ubuntu machine. I discovered two ways to do that.
The first is to build it from the downloaded Apache sources as described in the Apache docs, using ./configure, make, make install.
And the second way is to install it via apt-get update && apt-get install apache2.
I also noticed that the apt-get installation results in a different Apache configuration layout; for example, the directory /etc/apache2/* only exists with that installation method. When I install manually, the directories sites-available, sites-enabled, ... are simply missing.
Is there also a way to get these folders when running the manual installation?
Where do these differences come from?
This is nothing that can be summed up in a few sentences. You are basically asking: how does software management work on Linux systems? In short:
using the apt utility on an Ubuntu-based system means using the system's package management, and that should nearly always be what you want. That way the system can keep your Apache installation up to date, you can remove the software again without leaving artefacts behind, and the software is guaranteed to work with the libraries already installed on your system. Potential conflicts are resolved, and you can be certain that the configuration matches your system.
using the build system is the more generic (archaic) way: you do not only install the software, you build it from scratch from the sources before installing it. That is obviously only possible for open-source software. It certainly allows for more flexibility, but you are yourself responsible for a whole lot of things: first setting up a complete build toolchain, then configuring the package and selecting what you actually want to build, and last but not least keeping software installed that way up to date. This is rarely a good idea, with two exceptions (a rough sketch of the build route follows after these two points):
you absolutely cannot find a pre-built package for your Linux distribution, or
you want to make your own modifications to the software itself
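To make the contrast concrete, a rough sketch of both routes (the version number and paths are placeholders, and the configure options shown are only the common prefix/config-dir ones; check ./configure --help for the full list):

    # package-managed: files, updates and dependencies are tracked by the system
    sudo apt-get update && sudo apt-get install apache2

    # manual build: you pick the layout yourself via configure options
    tar xzf httpd-2.4.x.tar.gz && cd httpd-2.4.x
    ./configure --prefix=/usr/local/apache2 --sysconfdir=/usr/local/apache2/conf
    make
    sudo make install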
The difference in the folder layout of the Apache configuration is a separate thing. There are two aspects that come into play here:
you can change that layout using the countless build options offered by the build system (namely the options you can hand over to the configure utility on the command line).
typically the configuration of complex software packages (like the Apache HTTP server with all its modules) is broken up into various subfolders, so that you can keep an overview and so that additional packages can drop their own configuration snippets in place (this is mostly useful for prebuilt packages)
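If you want the Debian-style sites-available / sites-enabled folders after a manual build, nothing stops you from recreating that convention yourself; a hedged sketch, assuming the /usr/local/apache2 prefix from above (Debian's a2ensite/a2dissite helpers are part of its packaging, so here the symlinking is done by hand):

    sudo mkdir -p /usr/local/apache2/conf/sites-available /usr/local/apache2/conf/sites-enabled
    # make the main config pull in everything linked into sites-enabled
    echo 'IncludeOptional conf/sites-enabled/*.conf' | sudo tee -a /usr/local/apache2/conf/httpd.conf
    # "enable" a site by symlinking it, roughly what Debian's a2ensite does
    sudo ln -s ../sites-available/example.conf /usr/local/apache2/conf/sites-enabled/example.conf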
Long story short: in 99.8% of all cases you want to use prebuilt packages prepared for your distribution. That is the power of the software management systems under Linux (which still have no comparable counterpart in other operating systems).
My company deployed a Conan package to their Artifactory server. There are several versions of this package available for different configurations. Let's say there are two versions, one for Ubuntu and one for Debian. The URLs of these versions look like this:
https://artifactory.<my-company>/artifactory/<the-project-depending-on-the-conan-package>/_/<name-of-the-package>/<version-number-of-the-package>/_/0/package/<THE-HASH-FOR-UBUNTU-VERSION>/0/conan_package.tgz
https://artifactory.<my-company>/artifactory/<the-project-depending-on-the-conan-package>/_/<name-of-the-package>/<version-number-of-the-package>/_/0/package/<THE-HASH-FOR-DEBIAN-VERSION>/0/conan_package.tgz
When we build a project which depends on this package, we need to download the fitting version of it (Ubuntu or Debian). Unfortunately, these downloads need to happen during the build (we use CMake).
Now my question: as you can see, the URLs contain the hash of the package versions. But when I build the project, how do I know which hash is for the Ubuntu version and which for the Debian version? Obviously I need to distinguish between the two hashes to download the fitting package version.
Note: please assume that my Conan cache is empty.
I hope you can help me, and please correct me if there are any misunderstandings (I am new to CMake, Conan and Artifactory).
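For reference, a minimal sketch of how Conan is normally pointed at such a remote so that it resolves the matching binary (and therefore the hash) from the active settings/profile itself, rather than the URL being downloaded by hand; this assumes Conan 1.x, and the remote name and URL are placeholders:

    # register the Artifactory Conan remote once
    conan remote add company-remote <conan-repository-url>

    # Conan computes the package_id from the profile (os, compiler, libc, ...)
    # and downloads the binary with the matching hash from the remote
    conan install <name-of-the-package>/<version-number-of-the-package>@ -r company-remote --profile default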
I am designing an environment for productive research, i.e. writing, data-analysis, publication, etc.
In order to share the final results with others, I need to find a way to package this and to set up the local installation.
The project depends on Anaconda, so conda as a package manager is available.
It also includes
Pandoc and some Pandoc packages; some will have to be fetched directly from GitHub because certain versions are not available via conda-forge (doable in conda)
Atom and Atom packages; they should be installed and configured by my script (this works on the CLI via the apm package manager)
Node.js and Mermaid and a few other JS packages, which require npm calls
Some file-system-level operations, like deleting parts of packages where I only need a portion, creating symlinks and aliases, etc.
Maybe some Python code for modifying yaml/json/ini files or reading therefrom.
The main project will reside in a GitHub repository. It will be fine for users to clone it from there and start a build script locally.
My idea is to write a Bash shell script (rough sketch after this list) that
creates a conda environment based on requirements.yaml for everything that can be done this way
installs other parts using CLI commands (wget/curl etc.)
makes all necessary modifications using CLI commands, maybe with a few short Python scripts (e.g. for changing or reading JSON or yaml files).
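A rough sketch of what I have in mind (every file name and package name below is a placeholder, not the real project):

    #!/usr/bin/env bash
    set -euo pipefail

    # 1. conda environment from the checked-in spec
    conda env create -f requirements.yaml -n research-env

    # 2. non-conda tools via their own CLIs
    apm install some-atom-package                 # Atom packages via apm
    npm install -g @mermaid-js/mermaid-cli        # Mermaid CLI (provides mmdc)

    # 3. file-system tweaks: reuse shared binaries via symlinks instead of copies
    mkdir -p "$HOME/.local/bin"
    ln -sf "$(command -v mmdc)" "$HOME/.local/bin/mmdc"

    # 4. small Python helpers run inside the conda environment
    conda run -n research-env python scripts/patch_config.py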
My local usage will be on macOS Big Sur; Linux should be supported, and Windows compatibility would be nice to have.
Before I start:
Is this approach viable? I think it will be pretty transparent, but of course also a bit proprietary.
Docker is likely overkill for my purpose, and I have also read that execution will be slow on macOS.
The same environment will likely be installed multiple times on the same user's machine, so it is important that I can control e.g. the reuse of existing packages and files via aliases or symlinks. It is not important that the multiple installations are decoupled for the non-Python/non-conda parts (e.g. Atom, Node.js and Mermaid could be the same binaries for all installations; just the set of Python packages might vary by installation).
Thanks for your expertise!
I don't know how it got this bad. I'm a web developer, and I use Ubuntu, and here are just some of the package managers I'm using.
apt-get for system-wide packages
npm for node packages
pip for python packages
pip3 for python 3 packages
cabal for haskell packages
composer for php packages
bower for front-end packages
gem for ruby packages
git for other things
When I start a new project on a new VM, I have to install seemingly a dozen package managers from a dozen different places, and use them all to create a development environment. This is just getting out of control.
I've discovered that I can basically avoid installing and using pip/pip3 just by installing Python packages from apt, like sudo apt-get install python3-some-library. That saves me one package manager. That's awesome. But then I'm stuck with the Ubuntu versions of those packages, which are often really old.
What I'm wondering is, is there a meta-package manager that can help me to replace a few of these parts, so my dev environment is not so tricky to replicate?
I had a thought to make a package manager to rule them all for that very reason. I never finished it though; too much effort was required to stay compatible. Each package manager has a huge community supporting its upkeep.
The best advice I have is to try to reduce your toolchain for each type of project. Ideally you shouldn't need to work in every language you know on every project. How many of the projects you work on really use both Python 2 and Python 3 simultaneously?
Keep using apt for your system packages and install git with it. From there try to stick to one language per project. AFAIK all of the package managers you listed support installing packages from git. The languages you mentioned all have comparable sets of tooling, so use the toolchain available for the target language.
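For example, the git-based installs I'm sure of look like this (placeholders in angle brackets; composer and gem can do it too, but need a repositories entry or a Gemfile with bundler rather than a one-liner):

    # pip can install straight from a git repository
    pip install "git+https://github.com/<user>/<repo>.git@<tag-or-branch>"

    # npm accepts a git URL or the GitHub shorthand
    npm install git+https://github.com/<user>/<repo>.git
    npm install <user>/<repo>#<tag-or-branch>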
I worked with a team that was using composer, npm, bower, bundler, maven, and a tar.gz file for frontend SPAs, because those were the tools they knew. On top of all of that, they were using vagrant simply as a deployer. We looked at our toolchain, described our actual needs, and realized they could be met in a single language once we adopted appropriate tooling for the task at hand.
I would like to install a completely new version of Trac alongside our current version (0.11.7) and I am looking for ways to do this. After some research, the general advice is to use Python's virtualenv, but I am trying to find specific steps on how to accomplish this without interfering with our 0.11.7 version at all.
I am using Ubuntu as the OS. Any input including any possible pitfalls is appreciated.
Try virtualenvwrapper, which makes using python-virtualenv a breeze.
The steps to create and use such a Python virtual environment are explained in the user documentation. These environments form the core setup of my own Trac plugin development. They even allow using custom Python versions, if you ever need that. I found it useful to give each environment a self-explaining name and pair it with a Trac environment directory matching the db version required by that Trac version, i.e. virtualenv "trac-0.11_py2.4" with Trac env "sandbox_0.11", "trac-0.12_py2.6" with Trac env "sandbox_0.12", etc.
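A minimal sketch of that workflow (version numbers and paths are just examples, assuming virtualenvwrapper is installed and sourced in your shell):

    # one isolated Python environment per Trac version
    mkvirtualenv -p /usr/bin/python2.6 trac-0.12_py2.6
    pip install Trac==0.12

    # a separate Trac environment directory for that installation
    trac-admin ~/sandbox_0.12 initenv

    # later, switch between the installations
    workon trac-0.12_py2.6
    workon trac-0.11_py2.4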