Why do techs recommend YUM installs yet repositories and providers are ages behind?

I have been reading page after page after page about the benefits of using the YUM package installer and how NOBODY should build installs from source files (which, again, makes no sense to me), yet the repositories and source builders always package files in tarball format, leaving a TON of work (which usually ends up going wrong) to the individual instead of providing SRPMs for the end user.
Has the world gone mad? I feel like I am taking crazy pills!

Well, first of all, there's more to life than just RPM and YUM. An SRPM would be (somewhat) useless on Debian, for instance.
As for why you'd use a package repository over building everything yourself: I don't know about you, but I'd much rather just run (I'm using Ubuntu, so I have apt-get instead of yum):
# apt-get install firefox
Than trying to figure out all the dependencies, as well as all the dependencies' dependencies, make sure I have the correct versions of everything, download/build/install any that I don't have (or that are out of date; and if updating existing dependencies, make sure the newer versions don't break any existing software I have, and that I don't end up with 15 different versions of the same thing), and only after all that download/configure/build/install Firefox.
Then realise I'll also want Open Office or MySQL and start all over again!
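For a sense of what the package manager is handling for you, you can ask apt to show the dependency tree it resolves automatically (firefox here is just the example package from above):
$ apt-cache depends firefox
Every entry in that list, and each entry's own dependencies in turn, is what you would otherwise be tracking down by hand.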
That said, there are some packages that I install the latest version of from source. For example, I run my media centre off MythTV and I always like to build the latest version of that from Subversion. But even then, with a package manager, that's as easy as:
# apt-get build-dep mythtv
$ cd ~/src/mythtv/
$ svn co <svn repo of mythtv>
$ ./configure && make (etc.)
That is, the package management software already knows all the dependencies for MythTV and it can download and install them automatically. Why spend hours tracking it all down manually?
In the end, it sounds to me like maybe you'd prefer a distro like Gentoo... that's the benefit of Linux, of course. If you don't like how things are run in the Fedora/RedHat distribution, you can just choose a different one.

There are a few reasons to use a packaging infrastructure (like yum):
Creating "installations" is much easier to do, due to automatic dependency installation. From the simple yum install blah to creating chroots with mock/--installroot, or live CDs, etc.
Managing those installations: from the obvious yum update to operations that are much harder to do otherwise, like yum --security update, yum --bz=1234 update-minimal, yum --disablerepo=testing distro-sync.
Auditing those installations. The obvious examples here being yum history (not available in plain RHEL-5 atm.) and yum verify.
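To make a few of those concrete, here is roughly what they look like on the command line (package and path names are placeholders, and some of these need the relevant yum plugins installed):
# yum install httpd
# yum --security update
# yum --installroot=/srv/mychroot install bash
# yum history
That's installing with automatic dependency resolution, applying security-only updates, populating a chroot from the repos, and auditing what was changed and when.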
...however, speed is not a factor; Fedora rawhide, for instance, moves as fast as Gentoo.
RHEL-5 does not move that quickly because it's 3 years old and is not supposed to break, not because it's managed using yum/rpms. There are third-party providers, like iuscommunity, which release co-installable newer releases of various packages. Or, if you need to, you can create your own.
Or you can run a production server on Fedora rawhide or Gentoo; both will have the latest packages really quickly ... I would not recommend that option, though.

Among other things, tarballs are system-independent, whereas YUM appears to be RPM-based and thus mostly usable on Linux only (plus NetWare and AIX, so, as I said, Linux only :) )

Related

Difference between apache installation via apt-get install and configure, make, make install

Recently I tried to install an Apache web server on an Ubuntu machine. I discovered two ways to do that.
The first one is to install it as described in the Apache docs, using ./configure, make, make install on the downloaded Apache sources.
The second way is to install it via apt-get update && apt-get install apache2.
I also noticed that when I run the installation via apt-get, the Apache configuration is different; for example, the directory /etc/apache2/* is only available with this way of installation. So when I install it manually, the directories sites-available, sites-enabled, ... are just missing.
Is there also a way to get these folders when running the manual installation?
Where do these differences come from?
This is nothing that can be summed up in a few sentences. You are basically asking: how does software management work on Linux systems? In short:
using the apt utility on an Ubuntu-based system means you are using the system's package management system, which should nearly always be what you want. That way the system can take care of keeping your Apache installation up to date, you can remove software again without leaving artefacts behind, and the software is guaranteed to work with the libraries already installed on your system. Potential conflicts are resolved, and you can be certain that the configuration matches your system.
using the build system is the more generic (and archaic) way: you do not only install the software, you build it from scratch from the sources before installing it. That is only possible for open-source software, obviously. This certainly allows for more flexibility, but you are responsible for a whole lot of things yourself: first setting up a complete build environment, then configuring the package and selecting what you actually want to build, and last but not least keeping software installed that way up to date yourself. This is rarely a good idea, with two exceptions (a sketch of the manual build workflow is shown after this list):
you absolutely cannot find a pre-built package for your Linux distribution, or
you want to make your own modifications to the software itself.
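For the record, the manual route looks roughly like this for Apache httpd (the version number and install prefix below are illustrative only):
$ wget https://downloads.apache.org/httpd/httpd-2.4.58.tar.gz
$ tar xzf httpd-2.4.58.tar.gz && cd httpd-2.4.58
$ ./configure --prefix=/usr/local/apache2
$ make
$ sudo make install
Every line of that, including hunting down whatever build dependencies configure complains about, is on you, and so is repeating it for every future update.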
The difference in the folder layout of the Apache configuration is a separate thing. There are two aspects that come into play here:
you can change that layout using the countless build options offered by the build system (namely the options you hand over to the configure utility on the command line; see the example after this list), and
typically the configuration of complex software packages (like the Apache HTTP server with all its modules) is broken up into various sub-folders, so that you can keep an overview and so that additional packages can drop their extra configuration in place (this is mostly useful for prebuilt packages).
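For example, part of the Debian/Ubuntu-style layout can be approximated with configure options (the exact flags below are illustrative; ./configure --help lists them all):
$ ./configure --prefix=/usr --sysconfdir=/etc/apache2 --enable-mods-shared=all
The sites-available/sites-enabled split itself, however, is a Debian packaging convention: the package's maintainer scripts create those directories and the matching Include lines, so with a manual build you would have to set them up yourself.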
Long story short: in 99.8% of all cases you want to use prebuilt packages prepared for your distribution. That is the power of the software management systems under Linux (which still have no comparable counterpart in other operating systems).

PHPUnit: local VS global install

Installing PHPUnit with Composer globally seems more convenient to me, for these two reasons:
1. Using it everywhere without needing an extra install.
2. Just running phpunit instead of vendor/bin/phpunit (using an alias might solve this)
Are there any reasons why a local install might be the better choice? For example: using the exact same version every time. (I don't have a lot of experience with PHPUnit, so I'm not sure if this really is an issue or not.)
The big disadvantage of installing packages globally is that you might end up with different versions of PHPUnit between developers in your team (unless you are the only developer). This might cause some side effects.
If you install it locally using composer.json, then every developer in your team will have exactly the same version as you do for that specific application. Also, everybody will see when you change the version in composer.json.
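A local install is just this (the command adds PHPUnit to require-dev; the exact version constraint is up to you):
$ composer require --dev phpunit/phpunit
$ vendor/bin/phpunit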
If you don't like typing vendor/bin/phpunit, you can use a Makefile (which also lives in your project; note that the recipe line must be indented with a tab):
test:
	vendor/bin/phpunit --configuration=test/Unit/phpunit.xml
and then run it with:
make test
I like to install it via Composer and the require-dev block, but another way that comes highly recommended is to download phpunit.phar into the project and use that.
Either way, you control exactly which version is being used (and when it gets updated), which is the most important part, since you can't so easily control what people have installed globally.
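The phar route looks roughly like this (phar.phpunit.de is the project's own download host; pin a specific versioned file if you want reproducible builds):
$ wget https://phar.phpunit.de/phpunit.phar
$ php phpunit.phar --version
The phar then gets committed or fetched as part of the project, so everybody runs the same binary.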

How can I simplify my stack of package managers?

I don't know how it got this bad. I'm a web developer, and I use Ubuntu, and here are just some of the package managers I'm using.
apt-get for system-wide packages
npm for node packages
pip for python packages
pip3 for python 3 packages
cabal for haskell packages
composer for php packages
bower for front-end packages
gem for ruby packages
git for other things
When I start a new project on a new VM, I have to install seemingly a dozen package managers from a dozen different places, and use them all to create a development environment. This is just getting out of control.
I've discovered that I can basically avoid installing and using pip/pip3 just by installing Python packages from apt, like sudo apt-get install python3-some-library. That saves me one package manager, which is awesome. But then I'm stuck with the Ubuntu versions of those packages, which are often really old.
What I'm wondering is, is there a meta-package manager that can help me to replace a few of these parts, so my dev environment is not so tricky to replicate?
I had a thought to make a package manager to rule them all for that very reason. I never finished it, though; too much effort was required to stay compatible. For each package manager, you have a huge community supporting its upkeep.
The best advice I have is to try to reduce your toolchain for each type of project. Ideally you shouldn't need to work in every language you know on each project. How many of your projects really use both Python 2 and Python 3 simultaneously?
Keep using apt for your system packages and install git with it. From there, try to stick to one language per project. AFAIK all of the package managers you listed support installing packages from git, as shown below. The languages you mentioned all have comparable sets of tooling, so use the toolchain available for the target language.
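For instance, two of the tools from your list can pull straight from a repository like this (the repository paths are placeholders):
$ pip install git+https://github.com/someuser/somelib.git
$ npm install someuser/somerepo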
I worked with a team that was using composer, npm, bower, bundler, maven, and a tar.gz file for front-end SPAs, because those were the tools they knew. On top of all that, they were using Vagrant simply as a deployer. We looked at our toolchain, described our needs, and realized it could all be expressed in a single language once we adopted the appropriate tooling for the task at hand.

AIX apache rpm dependencies

I am evaluating Crowd SSO by Atlassian. To get Apache to use Crowd for authentication, there is a connector available from the vendor.
Problem
Unfortunately they do not provide anything for my OS (AIX). Instead they provide source code with instructions. The example there uses yum -y install autoconf automake gcc httpd-devel libcurl-devel libtool libxml2-devel mod_dav_svn subversion-devel to download the required packages, for which there is no equivalent on AIX (AFAIK).
So I went to the AIX Toolbox and got some packages. For the rest, I took Mr Perzl's help, and while installing the rpms I ended up getting dependency errors.
Question
Do I go with:
the solution for dependency hell given here,
the IBM way, or
something else that Google and my limited exposure to AIX are not telling me?
I am not a *nix expert, rather at a basic user level, and any installations are actually done by the admins. I need expert advice so as to get it right, and efficiently if possible.
I'd appreciate it if someone would retag this question so it gets attention from the right people.
It has been a while since I struggled with AIX and Linux, and I have had success with the Crowd Connector on Linux. So, having taken a look at both links, I would say:
The IBM documentation is only for the packages supplied with their Toolbox, and there is a risk that if you use it for other things you may end up at a dead end, as the utilities may refuse to play ball.
With Mr. Perzl's way, you are building it brick by brick, with known certainty. The main risk is that the right versions may not be available and/or one of the build tools may not work. In that case, you may still have to tweak the source and/or the build/make files to compile properly, but it will eventually work.
Once you have a compiled plugin and it works with a certain version of Apache, you will not need many of the dependencies, so the instructions you give to the admins to deploy will be minimal. Most likely, the runtime dependencies will be mod_dav_svn, curl and libxml.
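A quick way for the admins to sanity-check a target box is to query the rpm database for those runtime packages (names taken from the list above; they may differ slightly between the AIX Toolbox and Mr Perzl's rpms):
$ rpm -q mod_dav_svn curl libxml2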
Please post an update when you get it working.

Playframework: Upgrade process -- Best Practices

I'd very much appreciate anyone sharing best practices, patterns, anti-patterns, and backup/rollback processes that you have formulated for a pain-free, foolproof Play framework upgrade.
I'm thinking just replacing the bin/play directory with the latest version can cause problems.
Edit:
I'm looking for more specific version management strategies, say,
a) Do you just have /bin/play directory having the latest play version or
b) Do you keep versions like /bin/play-1.1 /bin/play-1.2 and change your $PATH to point to the latest (cons: you have to rebuild your modules, dependencies & libs; pros: gives better control over rollback)
I prefer to install play from source using git:
git clone git://github.com/playframework/play.git
cd play
# checkout specific version
git checkout 1.2.1
cd framework
ant
cd ..
ln -s $PWD/play ~/bin
So I have a full install, including all the source. Later, when Play was updated to version 1.2.2, I did the following:
cd <play_home>
git pull
git checkout 1.2.2
cd framework
ant
In your application you then do
play clean && play run
The advantage of running Play from a source build is that you can always and easily roll back to the previous version, or even test out features from current development. This does not solve the problem of having multiple versions of Play active at the same time, though.
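Rolling back is then just the same steps pointed at the older tag (1.2.1 being the version from the example above):
cd <play_home>
git checkout 1.2.1
cd framework
ant
followed by play clean && play run in your application again.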
I agree with Andre. However, if you are asking for best practice for a live project, I would do it differently.
You can have multiple versions installed on your local machine. The only thing you have to change is which one is visible in the path. For instance, you could have 1.1, 1.2 and 2.0, and depending on which one you want, you just modify your /home/youruser/.bashrc file.
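A minimal sketch of that, assuming each version is unpacked into its own directory under ~/bin (the directory names are illustrative):
# in /home/youruser/.bashrc
export PATH="$HOME/bin/play-1.2:$PATH"
Switching versions is then a one-line edit plus opening a new shell.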
The reason why a simple update of Play from git or hg is not a good idea is that, in case there are problems, you have to revert, roll back modules, or goodness knows what else.
It is far better to simply swap out the Play version, rebuild, and test extensively; once you are satisfied that everything is good, you can make the same changes on the live site.
If things don't work out, or you are hopelessly lost, all you have to do is revert the changes to your project and switch the Play version. You will be back to where you started.