I want to deploy my application to a remote test server using the Capistrano gem.
Both git and Rails should run on the same server.
I have two users: 'git' for the git repositories and 'rails' with RVM installed. After a git push I want the post-receive hook to run su rails and then cap deploy.
When I tried to push I got this message:
remote: su: must be run from a terminal
How can I work around this? Can I enable a TTY somehow over the git SSH connection?
I could give up Capistrano for this case, but I still want RVM and Rails to be used only by the 'rails' user (so su would probably have to be used in any case).
Edit
I have worked around the problem now. It is probably a very bad solution, but it works ;). I removed the original paths and echos from the scripts below.
post-receive hook before the workaround
#!/bin/bash
while read oldrev newrev ref
do
su rails # the script fails here
cd /path/to/rails/app/current/ && cap deploy
done
post-receive hook now
#!/bin/bash
while read oldrev newrev ref
do
ssh rails@localhost '/path/to/scripts/deploy.sh'
done
deploy.sh script
#!/bin/bash
CAP_DIR="/path/to/capistrano/dir"
RUBY="1.9.3-p194"
GEMSET="gemset_name"
[[ -s "$HOME/.rvm/scripts/rvm" ]] && . "$HOME/.rvm/scripts/rvm"
rvm use $RUBY
rvm gemset use $GEMSET
cd $CAP_DIR
cap deploy
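For the ssh rails@localhost call in the hook to work non-interactively, the 'git' user needs password-less key access to the 'rails' account. A minimal sketch, assuming OpenSSH defaults and that the 'git' user has no key yet:
# run once as the 'git' user on the server
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# authorize the new key for the 'rails' account (asks for the rails password once)
ssh-copy-id rails@localhost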
Related
I have a gitlab installation and I am trying to set up a gitlab-runner using a Docker executor. Everything is OK until the tests start running; then, since my projects are private and have no HTTP access enabled, they fail at clone time with:
Running with gitlab-runner 10.0.2 (a9a76a50)
on Jupiter-docker (5f4ed288)
Using Docker executor with image fedora:26 ...
Using docker image sha256:1f082f05a7fc20f99a4ccffc0484f45e6227984940f2c57d8617187b44fd5c46 for predefined container...
Pulling docker image fedora:26 ...
Using docker image fedora:26 ID=sha256:b0b140824a486ccc0f7968f3c6ceb6982b4b77e82ef8b4faaf2806049fc266df for build container...
Running on runner-5f4ed288-project-5-concurrent-0 via 2705e39bc3d7...
Cloning repository...
Cloning into '/builds/pmatos/tob'...
remote: Git access over HTTP is not allowed
fatal: unable to access 'https://gitlab.linki.tools/pmatos/tob.git': The requested URL returned error: 403
ERROR: Job failed: exit code 1
I have looked into https://docs.gitlab.com/ee/ci/ssh_keys/README.html
and decided to give it a try, so my .gitlab-ci.yml starts with:
image: fedora:26
before_script:
# Install ssh-agent if not already installed, it is required by Docker.
# (change apt-get to yum if you use a CentOS-based image)
- 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
# Run ssh-agent (inside the build environment)
- eval $(ssh-agent -s)
# Add the SSH key stored in SSH_PRIVATE_KEY variable to the agent store
- ssh-add <(echo "$SSH_PRIVATE_KEY")
# For Docker builds disable host key checking. Be aware that by adding that
# you are susceptible to man-in-the-middle attacks.
# WARNING: Use this only with the Docker executor, if you use it with shell
# you will overwrite your user's SSH config.
- mkdir -p ~/.ssh
- '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
... JOBS...
I set up the SSH_PRIVATE_KEY correctly, etc., but the issue is that the cloning of the project happens before before_script. I then tried to start the container with -v /home/pmatos/gitlab-runner_ssh:/root/.ssh, but the clone still tries to use HTTP. How can I force the container to clone over SSH?
Due to the way GitLab CI works, CI requires HTTPS access to the repository. Therefore, if you enable CI, you need to have HTTPS repo access enabled as well.
This is not an issue privacy-wise, however, as making the repository accessible over HTTPS doesn't stop GitLab from checking whether you're authorized to access it.
I then tried to start the container with -v /home/pmatos/gitlab-runner_ssh:/root/.ssh but still the cloning is trying to use HTTP
If possible within your container, try at least to add a
git config --global url."ssh://git@".insteadOf https://
(assuming the ssh user is git)
That would make any clone of any https URL use ssh.
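If it helps, the same rewrite could be appended to the before_script shown above (a sketch using the hostname from the error message; it only affects git commands run inside the job, such as submodule updates, not the runner's own initial clone):
- 'git config --global url."ssh://git@gitlab.linki.tools/".insteadOf "https://gitlab.linki.tools/"'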
I want to write a Dockerfile which exports a directory from a remote Subversion repository into the build context so I can work with these files in subsequent commands. The repository is secured with user/password authentication.
That Dockerfile could look like this:
# base image
FROM ubuntu
# install subversion client
RUN apt-get -y update && apt-get install -y subversion
# export my repository
RUN svn export --username=myUserName --password=myPassword http://subversion.myserver.com/path/to/directory
# further commands, e.g. on container start run a file just downloaded from the repository
CMD ["/bin/bash", "path/to/file.sh"]
However, this has the drawback of printing my username and password on the screen or in any logfile to which stdout is directed, as in Step 2 : RUN svn export --username=myUserName --password=myPassword http://subversion.myserver.com/path/to/directory. In my case, this is a Jenkins build log which is also accessible to other people who are not supposed to see the credentials.
What would be the easiest way to hide the echo of username and password in the output?
So far, I have not found any way to execute RUN commands in a Dockerfile silently when building the image. Could the password maybe be imported from somewhere else and attached to the command beforehand, so it does not have to be printed anymore? Or are there any methods for password-less authentication in Subversion that would work in the Dockerfile context (in terms of setting them up without interaction)?
The Subversion Server is running remotely in my company and not on my local machine or the Docker host. To my knowledge, I have no access to it except for accessing my repository via username/password authentication, so copying any key files as root to some server folders might be difficult.
A Dockerfile RUN command is always executed and cached when the Docker image is built, so the credentials that svn needs to authenticate must be provided at build time. You can avoid this kind of problem by moving the svn export call to the moment docker run is executed. To do that, create a bash script, declare it as the Docker entrypoint, and pass the username and password as environment variables. Example:
# base image
FROM ubuntu
ENV REPOSITORY_URL http://subversion.myserver.com/path/to/directory
# install subversion client
RUN apt-get -y update && apt-get install -y subversion
# make it executable before you add it here, otherwise docker will complain
ADD docker-entrypoint.sh /entrypoint.sh
ENTRYPOINT /entrypoint.sh
docker-entrypoint.sh
#!/bin/bash
# maybe add some validation here that the variables $REPO_USER and $REPO_PASSWORD exist
svn export --username="$REPO_USER" --password="$REPO_PASSWORD" "$REPOSITORY_URL"
# continue execution
path/to/file.sh
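The image itself is then built without any credentials baked in (a sketch; the tag matches the run examples below):
docker build -t your/image .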
Run your image:
docker run -e REPO_USER=jane -e REPO_PASSWORD=secret your/image
Or you can put the variables in a file:
.svn-credentials
REPO_USER=jane
REPO_PASSWORD=secret
Then run:
docker run --env-file .svn-credentials your/image
Remove the .svn-credentials file when you're done.
Maybe using SVN with SSH is a solution for you? You could generate a public/private key pair. The private key could be added to the image whereas the public key gets added to the server.
For more details you could have a look at this stackoverflow question.
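A rough sketch of that idea, assuming the server also accepts svn+ssh and you can get the public key authorized there (the key name and URL scheme are illustrative):
# generate a dedicated key pair for the build, without a passphrase
ssh-keygen -t rsa -N "" -f svn_deploy_key
# svn_deploy_key.pub must then be authorized for your account on the SVN server;
# the private key is copied into the image (as ~/.ssh/id_rsa) and checkouts would use e.g.
# svn export svn+ssh://myUserName@subversion.myserver.com/path/to/directory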
One solution is to ADD the entire SVN directory that you previously checked out on your builder's file-system (or added as an svn:externals if your Dockerfile is itself in an SVN repository, like this: svn propset svn:externals 'external_svn_directory http://subversion.myserver.com/path/to/directory' ., then do an svn up).
Then in your Dockerfile you can simply have this:
ADD external_svn_directory /tmp/external_svn_directory
RUN svn export /tmp/external_svn_directory /path/where/to/export/to
RUN rm -rf /tmp/external_svn_directory
Subversion stores authentication details on the client side (unless this is disabled in the configuration) and reuses the stored username/password for subsequent operations on the same URL.
Thus, you only have to run a (successful) svn export with username/password once in the Dockerfile, and can let SVN use the cached credentials afterwards (i.e. remove the auth options from later command lines).
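A minimal illustration of that, as consecutive RUN steps would execute it (a sketch; whether the password is actually cached depends on the client's password-store settings):
# the first call supplies the credentials once; svn caches them under ~/.subversion/auth
svn export --username=myUserName --password=myPassword http://subversion.myserver.com/path/to/directory
# later operations against the same URL can omit them and reuse the cache
svn info http://subversion.myserver.com/path/to/directory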
I am following Michael Hartl's tutorial and ran these two blocks of code:
$ rvm get head && rvm reload
$ chmod +x $rvm_path/hooks/after_cd_bundler
$ cd ~/rails_projects/sample_app
$ bundle install --without production --binstubs=./bundler_stubs
Now when I run Guard in my first terminal window everything is fine, but when I open another terminal window and run the exact same command, it complains that I am running Guard outside of Bundler. Why is that?
I still can't post images, but here are the screenshots of the two separate terminal windows:
(screenshot: terminal 1)
(screenshot: terminal 2)
Thanks!
Ryan
Tests
rvm current - checks that the proper ruby is selected
echo $PATH - the first entry should be the path to .../bundler_stubs
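A quick way to run both tests in the second terminal window (a sketch; the expected values depend on your ruby and gemset):
# should print the same ruby (and gemset) as the working terminal
rvm current
# the first PATH entry should end in .../bundler_stubs
echo $PATH | tr ':' '\n' | head -n 1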
Solutions
for both errors it might start working again after cd . - but this can be problematic
you need to make sure that RVM was loaded properly and that the proper ruby was loaded in the session:
make sure you use gnome-terminal with a login session enabled: https://rvm.beginrescueend.com/integration/gnome-terminal/
run rvm get head --auto - and keep a gist of the output just in case
restart your computer - it is required in some cases
make sure you do not overwrite PATH after RVM is loaded in your RC scripts
I'm just starting to use git-deploy instead of Capistrano; the problem, though, is that I'm using RVM on my server and the two are not mixing well.
Here is a link to the git-deploy which I'm using:
https://github.com/mislav/git-deploy
I'm using Ruby 1.9.2-p180 on my server, installed through RVM for the user. When I run git push and git-deploy runs my deploy scripts, it installs the gems in vendor/.bundle instead of my gems directory: /home/vps/.rvm/gems
Here is my deploy/after_push script
#!/usr/bin/env bash
set -e
oldrev=$1
newrev=$2
run() {
[ -x $1 ] && $1 $oldrev $newrev
}
echo files changed: $(git diff $oldrev $newrev --diff-filter=ACDMR --name-only | wc -l)
umask 002
git submodule init && git submodule sync && git submodule update
export GEM_HOME=/home/vps/.rvm/gems/ruby-1.9.2-p180
export MY_RUBY_HOME=/home/vps/.rvm/rubies/ruby-1.9.2-p180
export GEM_PATH=/home/vps/.rvm/gems/ruby-1.9.2-p180:/home/vps/.rvm/gems/ruby-1.9.2-p180@global
export RUBY_VERSION=ruby-1.9.2-p180
export PATH=/home/vps/.rvm/gems/ruby-1.9.2-p180/bin:/home/vps/.rvm/gems/ruby-1.9.2-p180@global/bin:/home/vps/.rvm/rubies/ruby-1.9.2-p180/bin:/home/vps/.rvm/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games
export rvm_config_path=/home/vps/.rvm/config
export rvm_path=/home/vps/.rvm
export rvm_examples_path=/home/vps/.rvm/examples
export rvm_rubies_path=/home/vps/.rvm/rubies
export rvm_usr_path=/home/vps/.rvm/usr
export rvm_src_path=/home/vps/.rvm/src
export rvm_version=1.6.3
export rvm_gems_path=/home/vps/.rvm/gems
export rvm_ruby_string=ruby-1.9.2-p180
export rvm_tmp_path=/home/vps/.rvm/tmp
export rvm_lib_path=/home/vps/.rvm/lib
export rvm_repos_path=/home/vps/.rvm/repos
export rvm_log_path=/home/vps/.rvm/log
export rvm_help_path=/home/vps/.rvm/help
export rvm_environments_path=/home/vps/.rvm/environments
export rvm_archives_path=/home/vps/.rvm/archives
rvm use 1.9.2
run deploy/before_restart
run deploy/restart && run deploy/after_restart
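For reference, much of that export list can usually be replaced by sourcing RVM's loader, as the deploy.sh earlier on this page does (a sketch, assuming RVM lives under /home/vps/.rvm):
# load RVM into this non-interactive shell, then pick the ruby; rvm use sets GEM_HOME, GEM_PATH and PATH itself
[[ -s "/home/vps/.rvm/scripts/rvm" ]] && . "/home/vps/.rvm/scripts/rvm"
rvm use ruby-1.9.2-p180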
Here is my deploy/before_restart
#!/home/vps/.rvm/rubies/ruby-1.9.2-p180/bin/ruby
oldrev, newrev = ARGV
def run(cmd)
exit($?.exitstatus) unless system "umask 002 && #{cmd}"
end
RAILS_ENV = ENV['RAILS_ENV'] || 'production'
use_bundler = File.file? 'Gemfile'
rake_cmd = use_bundler ? 'bundle exec rake' : 'rake'
if use_bundler
bundler_args = ['--deployment']
BUNDLE_WITHOUT = ENV['BUNDLE_WITHOUT'] || 'development:test'
bundler_args << '--without' << BUNDLE_WITHOUT unless BUNDLE_WITHOUT.empty?
# update gem bundle
run "bundle install #{bundler_args.join(' ')}"
end
if File.file? 'Rakefile'
num_migrations = `git diff #{oldrev} #{newrev} --diff-filter=A --name-only`.split("\n").size
# run migrations if new ones have been added
run "#{rake_cmd} db:migrate RAILS_ENV=#{RAILS_ENV}" if num_migrations > 0
end
# clear cached assets (unversioned/ignored files)
run "git clean -x -f -- public/stylesheets public/javascripts"
# clean unversioned files from vendor/plugins (e.g. old submodules)
run "git clean -d -f -- vendor/plugins"
Not only does it install the gems in vendor/.bundle, it installs them for the system version of Ruby, which is 1.9.1, so I cannot use them with my RVM version, which is what Apache2 is running. My current workaround is to manually SSH in and run bundle install in that directory.
Is there a cleaner way of doing this?
Do I have to have all those exports in my script file?
Update:
Even when I manually go into the directory and run bundle install it puts the gems into vendor/bundle for some reason.
Update:
After entering the following in my before_restart
run "ruby -v"
run "type ruby"
I get this result:
ruby 1.9.2p180 (2011-02-18 revision 30909) [i686-linux]
ruby is /home/vps/.rvm/rubies/ruby-1.9.2-p180/bin/ruby
I've taken out the bundler_args, but it still insists on installing my gems in vendor/bundle for Ruby 1.9.1.
Running bundle install --deployment intentionally places gems in the vendor directory.
It's pretty unlikely that you actually have Ruby 1.9.1 installed. If you are using a Debian-derived distribution, it's because the package is misnamed as 1.9.1 when it actually installs 1.9.2.
Otherwise I'm not really sure why your rvm use 1.9.2 line would not take effect. Perhaps run "ruby -v" in the before_restart script and check the version, or run "type ruby" to check its path.
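One more thing worth checking (my assumption, not something stated above): bundle install --deployment records its settings in the app's .bundle/config, and later plain bundle install runs keep honoring them, which would explain gems still landing in vendor/bundle. A sketch of inspecting and clearing that:
# look for remembered BUNDLE_PATH / BUNDLE_FROZEN entries left by an earlier --deployment run
cat .bundle/config
# remove them (or edit the file) so gems go back to the active RVM gemset
rm -f .bundle/config
bundle install --without development:test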
I'm struggling to get delayed_job working under Rails 3.0.9 (Ruby 1.9.2). The only way I have succeeded in running it is to type the command rake jobs:work manually.
But I want it to start automatically when the Rails application starts.
I have installed monit under Ubuntu and configured it to launch a script located in my app. The monit file looks like:
check process delayed_job with pidfile /home/me/myapp/tmp/pids/delayed_job.pid
start program = "/home/me/myapp/script/delayed_job start"
stop program = "/home/me/myapp/script/delayed_job stop"
And I added the environment setting in the delayed_job script file:
#!/usr/bin/env ruby
ENV['RAILS_ENV'] = "development"
require File.expand_path(File.join(File.dirname(__FILE__), '..', 'config', 'environment'))
require 'delayed/command'
Delayed::Command.new(ARGV).daemonize
When I run the command "sudo monit start delayed_job" I get the following error:
/usr/lib/ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require': no such file to load -- bundler/setup (LoadError)
So I guess this is because sudo is using the wrong Ruby environment.
I then tried the solution from:
rvm monit delayed_job
by adding rvm -S to the start program / stop program lines.
But it still fails with the error: rvm command not found
My RVM dir is located in my home dir: /home/me/.rvm
I tried the workarounds from (sudo changes PATH - why?) to change the PATH environment variable by adding
/usr/bin/env PATH=/home/me/.rvm/bin:$PATH
The command "sudo monit start delayed_job" then succeeded and the worker started!
But the issue is: when I launch sudo /etc/init.d/monit start and look at the syslog, I still get 'delayed_job' failed to start.
So I don't know how to investigate further, or how to get more verbose error output from monit.
I finally succeeded in solving this issue.
I modified the monit file like this:
check process delayed_job with pidfile /home/me/myapp/tmp/pids/delayed_job.pid
start program = "/bin/su - me -c 'cd /home/me/myapp/; script/delayed_job start'"
stop program = "/bin/su - me -c 'cd /home/me/myapp/; script/delayed_job stop'"
I also downgraded the daemons gem because there seem to be problems with the latest version, so I'm now using daemons v1.0.10.
I also modified the permissions of the log file /home/me/myapp/log/delayed_job.log, because it seems it was created earlier by root and my user had no access to it (I had problems testing the command "script/delayed_job start" as the "me" user).
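A sketch of that permission fix (the exact ownership to use is an assumption on my part):
# give the deploy user access to the log file that root created earlier
sudo chown me:me /home/me/myapp/log/delayed_job.log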
This is the only line that worked for me and read the ENV properly:
start program = "/usr/local/rvm/bin/rvm-shell -c 'cd /var/www/[APP]/current/; RAILS_ENV=production bundle exec bin/delayed_job start'"
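A matching stop line would presumably follow the same pattern (a sketch based on the line above):
stop program = "/usr/local/rvm/bin/rvm-shell -c 'cd /var/www/[APP]/current/; RAILS_ENV=production bundle exec bin/delayed_job stop'"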
Hope it helps!