How does RVM work in production with no users logged in?

Considering putting RVM into production (light duty) on a new machine. But I'm not visualizing how it will work if a user isn't logged in. RVM has been installed into /usr/local/rvm/bin/rvm so it is available to "everyone".
If the server restarts and is at the login screen, and background daemons are serving Apache/Rails, etc., and no .bashrc etc. has loaded... how/where do we specify which of RVM's Rubies to load?
Perhaps somewhere in Phusion's Passenger?
Who manages these gemsets? Are they shared?

You can use RVM's wrapper command to generate scripts that load up the correct RVM environment before executing the necessary binaries. The format is:
rvm wrapper [ruby_string] [wrapper_prefix] [binary[ binary[ ...]]]
For example, to create a binary named system_unicorn that loads ruby-1.9.2-p180 and then executes unicorn, use the following:
rvm wrapper ruby-1.9.2-p180 system unicorn
You can pass multiple binaries to create wrappers for. For example, to create wrappers for both unicorn and god, run
rvm wrapper ruby-1.9.2-p180 system unicorn god
ruby_string can be anything you can pass to rvm use, and thus can contain gemsets as well; for example, to create myapp_unicorn for the gemset my_app_gemset, use:
rvm wrapper ruby-1.9.2-p180@my_app_gemset myapp unicorn
When you install Passenger these days, it automatically creates a wrapper for its ruby (pretty sure it calls it passenger_ruby) that loads up the correct version of Ruby (the one you're using when you install it). You can use config/setup_load_paths.rb to specify a gemset -- see this Stack Overflow answer.
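With a system-wide install like the one described in the question, the generated wrapper typically lands in RVM's bin directory, so an init script can call it directly without any shell setup having run. A minimal sketch (the application path and unicorn options are just illustrative):
/usr/local/rvm/bin/system_unicorn -c /var/www/myapp/config/unicorn.rb -D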

I've dealt with this in the past by running all jobs with a user that has rvm set up. It does add complexity to many simple jobs because you have to make sure that rvm is loaded. If you need to run commands as root AND use rvm, you can use the rvmsudo command.
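For example, rvmsudo loads the user's RVM environment before running the given command under sudo; a typical (illustrative) use is building Passenger's Apache module:
rvmsudo passenger-install-apache2-module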
You can also install RVM systemwide as root:
https://rvm.beginrescueend.com/rvm/install/

1) If you install RVM globally, root installs the Ruby versions and gems under RVM
(read the RVM readme -- there seem to be possible problems when installing globally!)
2) On UNIX, each of your system processes gets started as a particular user,
e.g. on Linux through the init scripts in /etc/init.d/. While the processes are
created as a particular user, the mapping of user names to UID/GID, the home directory,
and the login shell are looked up in the /etc/passwd file
-- that is where the login shell (e.g. bash) is defined for a particular user.
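For example, you can check which login shell an account actually has (the "deploy" user here is hypothetical; field 7 of the passwd entry is the shell):
grep '^deploy:' /etc/passwd
# deploy:x:1001:1001:deploy:/home/deploy:/bin/bash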
So, coming back to your statement:
If the server restarts and is at the login screen, and background daemons are serving Apache/Rails, etc.,
and no .bashrc etc. has loaded... how/where do we specify which of RVM's Rubies to load?
You see the problem with that statement?
When the server starts and the background processes come up, each of them runs as a particular user, with a particular login shell and a particular home directory.
RVM requires that the user's login shell be set to /bin/bash -- otherwise it cannot set up the RVM environment for any of the processes run by that user. RVM will not work, for example, if you use /bin/nologin as the default shell.
Problem 1:
That's a security problem, of course! In general, daemons should not have a shell set, for security reasons.
Problem 2:
You don't want to make high-powered tools available to somebody breaking into your server -- that's why you shouldn't have cc and other tools on a production server, and that's why you should not compile your Rubies and gems on the production server, but rather copy the .rvm directory onto the production servers...
Problem 3 (more general):
The way RVM manages all its Ruby and gem versions is a very kludgy approach to version management.
Using special features of one particular login shell to facilitate version management is not a good idea IMHO -- sure, there is nothing better at the moment, but in the old days the idea behind Lude was a far better approach to installing different versions of software:
http://www.iro.umontreal.ca/contrib/lude/lude2_toc.html
Conclusion:
So, as I mentioned in a previous post, I highly recommend setting up a normal user account to run your Ruby and Rails processes, giving that account /bin/bash as its login shell, and copying your .rvm directory from your dev server to your production machines via scp or rsync -- it's the better and safer approach.
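A sketch of that copy step (the host name and the "deploy" account are illustrative):
rsync -az ~/.rvm deploy@production-server:~/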

I had a similar problem, where I want to deploy the ruby version and all associated gems to the production machines...
Because of the reasons outlined in my other post, I chose to go with a local RVM install. I have a user "deploy" on both my dev-servers and my production servers.
I would highly recommend that you use either 'rsync' or 'scp -rp' to copy the complete subdirectory ~/.rvm to the target machine (remember that you don't want to have cc and other tools on a production server!)
One important Gotcha:
Be sure that you use the identically named user account on all machines if you replicate the .rvm directory!
I noticed that RVM's internal book-keeping records some environment variables during installation of Ruby versions and gems -- in particular the name of the user account that was used and the path to the user's home directory. Beats me why they don't use $HOME and $USER, which are standard on all UNIXes; it seems like a real bug in RVM to me.
If you use the same user account on all machines, it will work just fine.
E.g. I use a user "deploy" which has the .rvm directory and which owns the running processes.
UPDATE
If you use rsync or scp to sync your deployment account, the downside is that you need to restart your servers, e.g. unicorn, manually.
Another promising way of deploying RVM and Rails apps is to deploy to one machine, where bundle update is run, and then to create an RPM out of the whole deployment account, which is then installed via rpm -Uhv or a private yum repository onto all the nodes. The advantage here is that the services on the nodes can easily be restarted via a %post action in the RPM.


What is Chef doing when I use the `s3cmd` recipe?

I am using Chef and this s3cmd cookbook.
As this tutorial says, I used knife to download and untar it. I actually got s3cmd working by following the tutorial instructions, but I have problems understanding where exactly the installation of s3cmd happens.
Can anyone explain to me what Chef is doing when using the s3cmd recipe?
When you run chef-solo locally (or chef-client in a server setup) you are telling Chef to compile the set of resources defined in your cookbooks' recipes, which is then applied to the node you are running the command on.
A resource defines the tasks and settings for something you want to set up.
The run_list set for the node defines what cookbook recipes will be compiled.
s3cmd
In your case, your run_list will have probably been recipe[s3cmd].
This instructs Chef to look in the s3cmd cookbook and, as you didn't give a specific recipe, it loads s3cmd/recipes/default.rb.
If you gave a specific recipe, like recipe[s3cmd::other], then Chef would load the file s3cmd/recipes/other.rb.
Chef compiles all the resources defined in the recipe(s) into a list and then runs through the list, applying changes to your system as required.
What s3cmd::default does
First it installs some packages (via your distribution's package manager):
python, python-setuptools, python-distutils-extra, python-dateutil, s3cmd
Note: This is entirely different to what the readme says about how s3cmd is installed! Always check!
Then it figures out where the config file should go:
if node['s3cmd']['config_dir']
home_folder = node['s3cmd']['config_dir']
else
home_folder = node['etc']['passwd'][node['s3cmd']['user']]['dir']
end
Finally, it creates the .s3cfg config file from a template in the cookbook:
template "#{home_folder}/.s3cfg" do...
What else?
Cookbooks can also define their own resources and providers to create more reusable cookbooks. The resource names and attributes will be defined in cookbook/resources/blah.rb, and the code for each resource action will be in cookbook/providers/blah.rb.
Code can be packaged in cookbook/libraries/blah.rb and included in other Ruby files.
Run chef-solo with the --log-level DEBUG option and step through the output. Try to identify the run list compilation phase, and then where everything is being applied.
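A sketch of such a debug run (the file names are assumptions -- point -c at your own solo config and -j at a JSON file whose run_list includes recipe[s3cmd]):
chef-solo -c solo.rb -j node.json -l debug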

Run Yard Server on Heroku

Is there a way to run the Yard (http://yardoc.org/guides/index.html) server on Heroku?
I did not find anything in the docs that explains how to do it.
Thanks a lot
This may have pitfalls I haven't uncovered yet (e.g. Yard caches its output files somewhere; since Heroku may wipe the filesystem and re-slug the app at any time, you will lose those cache files and they will have to be regenerated), but it generally works and is very simple.
Create a new folder on your hard drive somewhere (I used ~/Sites/yard-on-heroku)
Create a new Gemfile in there, listing the gems you want to be available (if they aren't in the standard Heroku install). I used the following:
source 'https://rubygems.org'
gem 'sinatra'
gem 'rails'
gem 'yard'
Run bundle install to install the gems.
Create a file called Procfile and put the following in it:
web: yard server -p $PORT -g
Create a new git repository with git init
Commit your files to it (Gemfile*, Procfile)
Create a Heroku app with heroku create
Push your repo to Heroku with git push heroku master
And that's it. If you go to the Heroku URL given when you created the site in step 7, you'll see Yard running with all the gems available to it. If you want to show only the gems listed in the Gemfile, rather than all the gems available by default (plus the ones in your Gemfile), you can use -G instead of -g in the Procfile.
(my first ever answer on StackOverflow, so hope it's OK - any advice on improvements, gratefully received).
I wrote a nice tutorial with my solution to this problem here: http://benradler.com/blog/2014/05/27/deploy-yard-documentation-server-to-heroku/

How can I use two different ruby installations for same project with rvm and rvmrc files?

I have an app that runs on JRuby in production. The same app can also run on Ruby 1.8.7 in development. How can I use RVM to switch between these rubies?
I am looking for a .rvmrc-like solution so that I can say
rvm use .rvmrc_ruby
or
rvm use .rvmrc_jruby
to switch between Ruby versions. I usually need to do this to test the same app on both Ruby and JRuby.
I would like a solution where I can check in such settings to Git and run these things without having to type the Ruby versions or gemset names every time I need to switch.
Generate those two files, and in .rvmrc write:
source ./.rvmrc_${TEST_WITH:-jruby}
then you can write in your shell:
export TEST_WITH=ruby
cd .
and restore with:
unset TEST_WITH
cd .
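The two sourced files are just ordinary .rvmrc content; a sketch, with the Ruby versions and the gemset name as placeholders for whatever your project actually uses:
# .rvmrc_jruby
rvm use jruby@myapp --create
# .rvmrc_ruby
rvm use 1.8.7@myapp --create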
This seems silly.
First, why are you even bothering to run a different Ruby in development? If this is for the occasional test run to ensure compatibility across different Rubies, then okay, but then…
Second, all you probably have in your .rvmrc is rvm use 1.8.7 or rvm use jruby -- that is all that happens when your .rvmrc file runs. What's so bad about just typing that out in the terminal? It's actually fewer characters than the example commands you gave, and you get tab completion too. If you need consistency across shells and really must have the .rvmrc reflect the current Ruby you want, then just change the file. Or, if you really must, write a simple script to do it for you (say it's called changervmrc.sh):
#!/bin/bash
echo "rvm use $1" > .rvmrc
and invoke with ./changervmrc.sh jruby. You could adapt this to include switching to a specific gemset if needed.
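A sketch of that gemset variant (the optional second argument names the gemset; both names are placeholders):
#!/bin/bash
echo "rvm use ${1}${2:+@$2}" > .rvmrc
Invoked as ./changervmrc.sh jruby myapp, this writes rvm use jruby@myapp into .rvmrc.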

Why does building from binary files not require root access?

When I am on my dept's server, I cannot use commands such as "apt-get install nethack". I have to build nethack from binary files to get it working, or at least so I have been told. I cannot understand the reason. Why do I need to build things from binaries? Why is the use of commands such as "apt-get" forbidden? Why do I not need root access to build from binaries?
apt-get is a system-level command that installs packages for all users.
If you download and compile, you are only creating local "copies" of the binaries, not system-wide. If you tried to complete the install process with make install this would most likely fail because you do not have sufficient privileges to install the program for all users' access (same reason you can't run apt-get install)
When you compile a program from source, you can give it the '--prefix=~/' option. This causes it to install relative to your own home directory (so binary programs typically end up in '~/bin', man pages in '~/man', etc.). This poses no problem because you already have permission to write there.
Apt-get, on the other hand, installs packages in the global filesystem ('/bin/', '/usr/bin/', etc.), which can impact other users and so, quite rightly, requires administrative access.
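A sketch of such a local install, assuming the project ships a standard configure script (nethack's own build system may differ):
./configure --prefix=$HOME
make
make install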
If you want to install some program you can use the command
apt-get source app-name
This works even if you are not root, since it only fetches the source code for app-name and puts it in the current directory. It is easier than having to track down the source yourself, and there is a better chance of getting it to work, since you download the version that should work on your system.
Alternatively you should bug your sysadmin to install the programs you need, since it is his job (and if you need them, chances are that the rest of your team does too).
Because apt-get will install a program system wide.
The locations to which apt-get writes installed files (/bin, /usr/bin, ...) are restricted to root access. I imagine that when you build from source you're not executing the install step of the build. You're going to need to set a prefix for the installation such that the packages end up somewhere you can write. This thread talks a bit about setting installation prefixes; you'll probably want to set your prefix to something like
~/software/
and then add the resulting bin directories to your PATH.
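For example, with that prefix, something like this in your shell profile would do it:
export PATH="$HOME/software/bin:$PATH"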

Can you compile Apache HTTP Server and redeploy its binaries to a different location?

As part of our product release we ship Apache HTTP Server binaries that we have compiled on our (UNIX) development machine.
We tell our clients to install the binaries (on their UNIX servers) under the same directory structure that we compiled it under. For some clients this is not appropriate, e.g. where there are restrictions on where they can install software on their servers and they don't want to compile Apache themselves.
Is there a way of compiling Apache HTTP Server so its installation location(s) can be specified dynamically using environment variables?
I spent a few days trying to sort this out and couldn't find a way to do it. It led me to believe that the Apache binaries hard-code some directory paths at compile time, preventing the portability we require.
Has anyone managed to do this ?
I think the way to do (get around) this problem is to develop a "./configure && make" script that your client uses to specify, compile, and install the binaries. That would of course require that the client has all the source code installed on their server, or you can make it available on an NFS share.
If you are compiling Apache2 for a particular location but want your clients to be able to install it somewhere else (and I'm assuming they have the same architecture and OS as your build machine) then you can do it but the apachectl script will need some after-market hacking.
I just tested these steps:
Unpacked the Apache2 source (this should work with Apache 1.3 as well though) and ran ./configure --prefix=/opt/apache2
Ran make then sudo make install to install on the build machine.
Switch to the directory above the install directory (/opt) and tar and gzip up the binaries and config files. I used cd /opt; sudo tar cf - apache2 | gzip -c > ~/apache2.tar.gz
Move the tar file to the target machine. I decided to install in /opt/mynewdir/dan/apache2 to test. So basically, your clients can't use rpm or anything like that -- unless you know how to make that relocatable (I don't :-) ).
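The extraction step isn't shown above; using the example paths, something like this on the target machine should recreate the tree (adjust ownership and permissions as needed):
sudo mkdir -p /opt/mynewdir/dan
cd /opt/mynewdir/dan
sudo tar xzf ~/apache2.tar.gz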
Anyway, your client's conf/httpd.conf file will be full of hard-coded absolute paths -- they can just change these to whatever they need. The apachectl script also has hard-coded paths. It's just a shell script, so you can hack it or give them a sed script to convert the old paths from your build machine to the new paths on their machines.
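A sketch of that sed step (GNU sed shown; on other UNIXes -i may need a suffix argument, and the two paths are just the ones from this example):
cd /opt/mynewdir/dan/apache2
sudo sed -i 's|/opt/apache2|/opt/mynewdir/dan/apache2|g' bin/apachectl conf/httpd.conf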
I skipped all that hackery and just ran ./bin/httpd -f /opt/mynewdir/dan/conf/httpd.conf :-)
Hope that helps. Let us know any error messages you get if it's not working for you.
I think the way to do (get around) this problem is to develop a "./configure && make" script that your client uses to specify, compile, and install the binaries. That would of course require that the client has all the source code installed on their server, or you can make it available on an NFS share.
Not to mention a complete build toolchain. These days, GCC doesn't come by default with most major distributions. Wouldn't it be sane to force the client to install it to /opt/my_apache2/ or something like that?
I suggest one change to Hissohathair's answer.
6) ./bin/httpd -d <server path> (although it can be overridden in the config file)
In apachectl there is an HTTPD variable that you can override to point it at the right binary.