Why does building from source not require root access?

When I am on my department's server, I cannot use commands such as "apt-get install nethack". I have been told that I have to build NetHack from source to get it working. I cannot understand the reason. Why do I need to build things from source? Why is the use of commands such as "apt-get" forbidden? And why do I not need root access to build from source?

apt-get is a system-level command that installs packages for all users.
If you download and compile, you are only creating local "copies" of the binaries, not system-wide ones. If you tried to complete the install process with make install, it would most likely fail, because you do not have sufficient privileges to install the program for all users (the same reason you can't run apt-get install).

When you compile a program from source, you can pass '--prefix=~/' to its configure step. This causes it to install relative to your own home directory (so binaries typically end up in '~/bin', man pages in '~/man', etc.). This poses no problems because you already have permission to write there.
apt-get, on the other hand, installs packages in the global filesystem ('/bin/', '/usr/bin/', etc.), which can impact other users and so, quite rightly, requires administrative access.
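For example, a typical from-source install into your home directory looks like this (a sketch; $HOME is used instead of ~ because not every configure script expands the tilde):
./configure --prefix=$HOME
make
make install   # writes under $HOME/bin, $HOME/man, ... -- no root needed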
If you want to install some program you can use the command
apt-get source app-name
This will work even if you are not root, since it only fetches the source code for app-name and puts it in the current directory. That is easier than having to track down the source yourself, and there is a better chance of getting it to work, since you download the version that should work on your system.
Alternatively, you should bug your sysadmin to install the programs you need, since that is their job (and if you need them, chances are that the rest of your team does too).

Because apt-get will install a program system-wide.

The locations to which apt-get writes installed files (/bin, /usr/bin, ...) are restricted to root. I imagine that when you build from source you're not executing the install step of the build. You're going to need to set a prefix for the installation so that the packages end up somewhere you can write. This thread talks a bit about setting installation prefixes; you'll probably want to set your prefix to something like
~/software/
and then add the resulting bin directories to your PATH.
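For instance (a sketch; the ~/software prefix is just the example above), appending this line to ~/.bashrc makes the locally installed binaries visible to your shell:
export PATH="$HOME/software/bin:$PATH"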

Singularity Recipe: How to access executable within container?

I am a beginner with Singularity.
What I want to achieve in the long run: I have a programming project with a long list of dependencies, and I want to be able to give the program to other people in my company without there being bugs caused by missing dependencies or wrong versions of dependencies.
The idea was now to use Singularity in order to easily provide a working environment.
In order to test this, I wrote a Hello World application which I now want to run in a container. I have a folder HelloWorld/ which contains the source code for a C++ Qt project. Then I wrote the following recipe file:
project.recipe
Bootstrap: docker
From: ubuntu:18.04
%setup
cp -R <some_folder>/HelloWorld ${SINGULARITY_ROOTFS}/HelloWorld
%post
apt update
apt-get install -y qt5-default
apt install -y g++
apt-get install -y build-essential
cd HelloWorld
qmake
make
echo "after build:"
ls
%runscript
echo "before execution:"
ls HelloWorld/
./HelloWorld/HelloWorld
where the echos and directory listings are for my current debugging process.
I can successfully build an image file using sudo singularity build --writable project.img project.recipe. (My debugging output shows me that the executable was built successfully.)
The problem is now that if I try to run it using ./project.img, or singularity run project.img, it won't find the executable.
Using my debugging output, I found out that the lines in %runscript operate on the folders outside of the container.
Tutorials like https://sylabs.io/guides/3.1/user-guide/build_a_container.html made it seem to me as if my recipe was the way to go, but apparently it isn't?
My questions:
Is there some way for me to access my executable? Am I calling it wrong?
Is the way I do it the way it is supposed to be done? Or would one normally do something like getting the executable outside of the container and then use the container to call that outside file? Or is there a different best practice?
If the executable is to be copied outside of the container after compilation, how do I do that? How do I access outside folders when I'm within %post?
Is this the best work process for what I want to achieve? Later on, my idea is that the big project is copied likewise in the container, dependencies are either installed or copied, then the project is compiled and finally its source being deleted. I also considered using a repository, but I can't have the project being in an open repository, and I don't want to store any passwords.
Firstly, use %files, don't use %setup. %setup is run as root and can directly modify the host server. You can very easily and accidentally break things without realizing it. You can get the same effect this way:
%files
some_folder/HelloWorld /HelloWorld
You are calling it wrong. In your %setup (and hopefully now in your %files) step, you are copying the data to /HelloWorld. In your %runscript you are calling ./HelloWorld/HelloWorld, which is the equivalent of $PWD/HelloWorld/HelloWorld. Since singularity automatically mounts in $PWD (as well as $HOME and some other directories), you are not calling what you're trying to call.
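A corrected %runscript (a sketch, assuming the files live at /HelloWorld inside the image as in the %files stanza above) calls the executable by its absolute path:
%runscript
echo "before execution:"
ls /HelloWorld/
exec /HelloWorld/HelloWorld "$@"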
You don't copy the executable outside of the container, you just need to make sure what you're executing is where you think it is.
There is no access to the host filesystem in %post, you should have everything you need copied in via %files first.
That's a reasonable workflow. Having a local private repo for the code is probably a good idea for tracking your changes, but that's your call.

Getting an installed module to recognize changes to config files

I have a package that uses config.json for some settings it uses. I keep the package locally rather than installing it from CPAN. My problem is when I make changes to config.json, the package doesn't recognize the changes since the config file's cached elsewhere, forcing me to run zef install --force-install or delete precomp. How can I ensure that the package always recognizes updates to the config file?
When you install packages using zef, it keeps them in the filesystem, but their names are converted into SHA1 hashes, something like
/home/jmerelo/.rakudobrew/moar-2018.03/install/share/perl6/site/sources/81436475BD18D66BFD96BBCEE07CCCDC0F368879
zef keeps track of them, however, and you can locate them using zef locate, for instance:
zef locate lib/Zef/CLI.pm6
You can run that from a program, for instance this way:
sub MAIN( Str $file ) {
    my $location = qqx/zef locate $file/;
    my $sha1 = ($location ~~ /\s+ \=\> \s+ (.+)/);
    say "$file → $sha1[0]";
}
which will return pretty much the same, except it will give you the first location of the file you give it at the command line:
lib/Zef/CLI.pm6 → /home/jmerelo/.rakudobrew/moar-2018.03/install/share/perl6/site/sources/81436475BD18D66BFD96BBCEE07CCCDC0F368879
You probably need to install your config.json file in a resources directory (which is the preferred location) and then use something like that.
That said, actually installing a module you're still testing is probably not the best strategy. It's better to just keep it in the directory you're working in and run with perl6 -I<that directory>, or add use lib '<that directory>' to your code. You can delete that when you release, or keep it, since it only adds another directory to the search path and will not harm the released module.
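For instance (a sketch; MyModule and the lib/ layout are illustrative), you can run against the working copy without installing it:
perl6 -I lib -e 'use MyModule;'
Since the module is loaded from the working directory rather than the installed-modules store, edits to a config.json kept alongside it are picked up on the next run.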

Missing var directory after Apache httpd installation

I installed Apache httpd on my Linux VM and wanted to start its service, but I'm getting the error: (13)Permission denied: Error retrieving pid file run/httpd.pid. I realised that I do not have this file, nor even a var directory. Any solutions for this? Pardon me, this is my first time touching servers.
I installed the apache like this:
gzip -d httpd-2.2.21.tar.gz
tar xvf httpd-2.2.21.tar
cd httpd-2.2.21
./configure --prefix=/home/Hend/Desktop/Server
make
make install
You have several alternatives for this:
Install apache in user directory, run as non-root user
This is the way you started doing that. But then you'll have to:
Add some customizations to the start script, or at least pass it enough environment variables to tell it where the configuration, pidfile, etc. are
Modify the whole Apache configuration, since the default uses directories to which you don't have access. For example, you should put your DocumentRoot somewhere other than /var/www
Run the server on a non-standard port. Since unprivileged users cannot listen on ports lower than 1024, you must run Apache on another port, such as 8000 instead of 80. This way, though, all your URLs will look like http://example.com:8000 instead of http://example.com.
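A minimal sketch of the corresponding httpd.conf overrides (the paths assume the --prefix from the question; adjust to your own layout):
Listen 8000
PidFile /home/Hend/Desktop/Server/logs/httpd.pid
DocumentRoot "/home/Hend/Desktop/Server/htdocs"
ErrorLog /home/Hend/Desktop/Server/logs/error_log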
Install apache from sources, into /usr/local
You can install Apache in the default path for software that is not part of the distro, that is /usr/local instead of /usr. That is, use --prefix=/usr/local when running configure.
This way, things should be much simpler. In any case, you'll have to start the webserver as root and configure it to change user only after the socket is opened.
Install apache from sources, into /usr/
You can also install Apache in its default location, using --prefix=/usr/. This way things should be much simpler: it should install init scripts in the usual location (/etc/init.d/apache2 or /etc/init.d/httpd), configuration in /etc/apache2, etc.
Beware that if you do this, all the files Apache installs will conflict with the ones provided by your Linux distribution's package!
Install apache from your distribution package manager
Apart from cases where you need a particular setup (for example with non-standard patches) or a particular non-packaged version (not recommended, since versions packaged with distros are usually guaranteed to be stable while others are not), this is the best option.
Benefits of doing this:
Avoid a huge setup + configuration process just to make it work
Versions from your distro should be "guaranteed" to be stable and tested with all the other programs shipped with it. The latest version is not always the better one.
Each time a new release is published (or, more importantly, there is a security update), you can upgrade semi-automatically by running a single upgrade command, without worrying about things going wrong during the update.
This way the whole installation is just a matter of a couple of commands.
For example, on debian:
apt-get install apache2
On fedora:
yum install httpd
etc.
Then, if the service is not already started by the package manager, you can start it with
/etc/init.d/apache2 start
or
/etc/init.d/httpd start
Job done. Now just put stuff in /var/www/ (or the equivalent directory, depending on the distro) and serve it through your web server.
You have to start apache as root
Have you read the docs in the source distribution, i.e. the INSTALL file?
less INSTALL
For complete installation documentation, see [ht]docs/manual/install.html or
http://httpd.apache.org/docs/2.2/install.html
$ ./configure --prefix=PREFIX
$ make
$ make install
$ PREFIX/bin/apachectl start

How does RVM work in production with no users logged in?

Considering putting RVM into production (light duty) on a new machine. But I'm not visualizing how it will work if a user isn't logged in. RVM has been installed into /usr/local/rvm/bin/rvm so it is available to "everyone".
If server restarts and is at login screen and background daemons are serving apache/rails, etc. and no .bashrc, etc. have loaded...how/where do we specify which of RVM's Rubies to load?
Perhaps somewhere in Phusion's Passenger?
Who manages these gemsets? Are they shared?
You can use RVM's wrapper command to generate scripts that load up the correct RVM environment before executing the necessary binaries. The format is:
rvm wrapper [ruby_string] [wrapper_prefix] [binary[ binary[ ...]]]
For example, to create a binary named system_unicorn that loads ruby-1.9.2-p180 and then executes unicorn, use the following:
rvm wrapper ruby-1.9.2-p180 system unicorn
You can pass multiple binaries to create wrappers for. For example, to create wrappers for both unicorn and god, run
rvm wrapper ruby-1.9.2-p180 system unicorn god
ruby_string can be anything you can pass to rvm use, and thus can contain gemsets as well; for example, to create myapp_unicorn for the gemset my_app_gemset, use:
rvm wrapper ruby-1.9.2-p180#my_app_gemset myapp unicorn
When you install Passenger these days, it automatically creates a wrapper for its Ruby (pretty sure it calls it passenger_ruby) that loads up the correct version of Ruby (the one you're using when you install it). You can use config/setup_load_paths.rb to specify a gemset -- see this Stack Overflow answer.
I've dealt with this in the past by running all jobs as a user that has RVM set up. It does add complexity to many simple jobs, because you have to make sure RVM is loaded. If you need to run commands as root AND use RVM, you can use the rvmsudo command.
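For example (the gem is illustrative), to install a gem as root while keeping the RVM environment:
rvmsudo gem install passenger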
You can also install RVM systemwide as root:
https://rvm.beginrescueend.com/rvm/install/
1) root installs the Ruby versions and gems under RVM if you install them globally (read the RVM readme -- there seem to be possible problems when installing globally!)
2) if you're on UNIX, each of your system processes gets started as a particular user, e.g. on Linux through the init scripts in /etc/init.d/. While the processes are created as a particular user, the mapping of user names to UID/GID, the home directory, and the login shell are looked up in the /etc/passwd file -- that is where the login shell (e.g. bash) is defined for a particular user.
So, coming back to your statement:
If server restarts and is at login screen and background daemons are serving apache/rails, etc.
and no .bashrc, etc. have loaded...how/where do we specify which of RVM's Rubies to load?
You see the problem with that statement?
When the server starts, and the background processes are started, each of them gets started as a particular user, with a particular login shell and with a particular home directory.
RVM requires that you have your login shell set to /bin/bash -- otherwise it cannot set up the RVM environment for any of the processes run by that particular user. For example, RVM will not work if you use /bin/nologin as the default shell.
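You can check which login shell an account has (a sketch; the "deploy" account is illustrative):
getent passwd deploy
# deploy:x:1001:1001::/home/deploy:/bin/bash   <- the last field is the login shell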
Problem 1:
That's a security problem, of course! In general, daemons should not have a shell set, for security reasons.
Problem 2:
You don't want to make high-powered tools available to somebody breaking into your server -- that's why you shouldn't have cc and other build tools on a production server, and that's why you should not compile your Rubies and gems on the production server, but rather copy the .rvm directory onto the production servers...
Problem 3 (more general):
The way RVM manages all its Ruby and gem versions is a very kludgy approach to version management...
Using special features of one particular login shell to facilitate version management is not a good idea IMHO -- sure, there is nothing better at the moment, but in the old days the idea behind Lude was a far better approach to installing different versions of software:
http://www.iro.umontreal.ca/contrib/lude/lude2_toc.html
Conclusion:
So, as I mentioned in a previous post, I highly recommend setting up a normal user account to run your Ruby and Rails processes, giving that account /bin/bash as its login shell, and copying your .rvm directory from your dev server to your production machines via scp or rsync -- it's the better and safer approach.
I had a similar problem, where I wanted to deploy the Ruby version and all associated gems to the production machines...
Because of the reasons outlined in my other post, I chose to go with a local RVM install. I have a user "deploy" on both my dev-servers and my production servers.
I would highly recommend that you use either rsync or scp -rp to copy the complete subdirectory ~/.rvm to the target machine (remember that you don't want to have cc and other build tools on a production server!).
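A sketch of the copy step (the deploy user and the prod1 hostname are illustrative):
rsync -az --delete ~/.rvm/ deploy@prod1:.rvm/
The trailing slashes matter: they sync the contents of ~/.rvm into the remote .rvm directory instead of nesting a second .rvm inside it.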
One important gotcha:
Be sure that you use an identically named user account on all machines if you replicate the .rvm directory!
I noticed that RVM's internal book-keeping records some environment variables during the installation of Ruby versions and gems -- in particular the name of the user account that was used and the path to the user's home directory. Beats me why they don't use $HOME and $USER, which are standard on all UNIXes... seems like a real bug in RVM to me.
If you use the same user account on all machines, it will work just fine.
e.g. I use a user "deploy" which has the .rvm directory and which owns the running processes.
UPDATE
If you use rsync or scp to sync your deployment account, the downside is that you need to restart your servers (e.g. unicorn) manually.
Another promising way of deploying RVM and Rails apps is to deploy to one machine, run bundle update there, and then create an RPM out of the whole deployment account, which is then installed via rpm -Uhv or a private yum repository onto all the nodes. The advantage here is that the services on the nodes can easily be restarted via a %post action in the RPM.

Can you compile Apache HTTP Server and redeploy its binaries to a different location?

As part of our product release we ship Apache HTTP Server binaries that we have compiled on our (UNIX) development machine.
We tell our clients to install the binaries (on their UNIX servers) under the same directory structure that we compiled it under. For some clients this is not appropriate, e.g. where there are restrictions on where they can install software on their servers and they don't want to compile Apache themselves.
Is there a way of compiling Apache HTTP Server so its installation location(s) can be specified dynamically using environment variables ?
I spent a few days trying to sort this out and couldn't find a way to do it. It led me to believe that the Apache binaries were hard-coding some directory paths at compilation, preventing the portability we require.
Has anyone managed to do this ?
I think the way to do (or rather get around) this is to provide a "./configure && make" script that your clients use to specify, compile, and install the binaries. That would of course require that the client has all the source code on their server, or that you make it available on an NFS share.
If you are compiling Apache2 for a particular location but want your clients to be able to install it somewhere else (and I'm assuming they have the same architecture and OS as your build machine) then you can do it but the apachectl script will need some after-market hacking.
I just tested these steps:
Unpacked the Apache2 source (this should work with Apache 1.3 as well though) and ran ./configure --prefix=/opt/apache2
Ran make then sudo make install to install on the build machine.
Switched to the directory above the install prefix and tarred and gzipped up the binaries and config files. I used cd /opt; sudo tar cf - apache2 | gzip -c > ~/apache2.tar.gz
Move the tar file to the target machine. I decided to install in /opt/mynewdir/dan/apache2 to test. So basically, your clients can't use rpm or anything like that -- unless you know how to make that relocatable (I don't :-) ).
Anyway, your clients' conf/httpd.conf file will be full of hard-coded absolute paths -- they can just change these to whatever they need. The apachectl script also has hard-coded paths. It's just a shell script, so you can hack it or give them a sed script to convert the old paths from your build machine to the new paths on your clients.
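A sketch of such a rewrite (GNU sed syntax; the old and new prefixes are the example paths from these steps):
cd /opt/mynewdir/dan/apache2
sudo sed -i 's|/opt/apache2|/opt/mynewdir/dan/apache2|g' conf/httpd.conf bin/apachectl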
I skipped all that hackery and just ran ./bin/httpd -f /opt/mynewdir/dan/conf/httpd.conf :-)
Hope that helps. Let us know any error messages you get if it's not working for you.
I think the way to do (or rather get around) this is to provide a "./configure && make" script that your clients use to specify, compile, and install the binaries. That would of course require that the client has all the source code on their server, or that you make it available on an NFS share.
Not to mention a complete build toolchain. These days, GCC doesn't come by default with most major distributions. Wouldn't it be saner to have the client install it to /opt/my_apache2/ or something like that?
I suggest one change to @Hissohathair's answer:
6) ./bin/httpd -d <server path> (although it can also be overridden in the config file)
In apachectl there is an HTTPD variable that you can override to the same effect.
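For instance (a sketch; the path is the example install location from the steps above), the override near the top of bin/apachectl could look like:
HTTPD='/opt/mynewdir/dan/apache2/bin/httpd -d /opt/mynewdir/dan/apache2'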