Apache, Ruby (not RVM), Passenger LoadPath - (Bundler::GemNotFound) - ruby-on-rails-3

I'm on an AWS Micro instance using Ruby 1.9.3p327, RubyGems 1.8.23 and Passenger 3.
My server (when I'm not taking it down trying to fix things) is at http://shaanan.cohney.info/gitlab/
I'm attempting to install Gitlab onto Apache2.
It all goes well up to the point where I try to deploy the installation via Passenger.
I get the error:
Could not find multi_json-1.5.0 in any of the sources (Bundler::GemNotFound)
/usr/local/lib/ruby/gems/1.9.1/gems/bundler-1.3.0/lib/bundler/spec_set.rb:95:in `block in materialize'
The load path I printed is:
["/usr/local/lib/ruby/gems/1.9.1/gems/passenger-3.0.19/lib", "/usr/local/lib/ruby/gems/1.9.1/gems/bundler-1.3.0/lib", "/usr/local/lib/ruby/site_ruby/1.9.1", "/usr/local/lib/ruby/site_ruby/1.9.1/x86_64-linux", "/usr/local/lib/ruby/site_ruby", "/usr/local/lib/ruby/vendor_ruby/1.9.1", "/usr/local/lib/ruby/vendor_ruby/1.9.1/x86_64-linux", "/usr/local/lib/ruby/vendor_ruby", "/usr/local/lib/ruby/1.9.1", "/usr/local/lib/ruby/1.9.1/x86_64-linux"]
This is missing the directory where my gems are installed.
I've searched around online but have not yet been able to figure out a fix after trying multiple things.
The error appears when I visit myhostname.com/gitlab
My current guess is that it has something to do with a mismatch between the user Ruby was installed for, the application's user, and the user Passenger runs as.
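A quick way to check that guess (a sketch, not from the original post; it assumes the application runs as the gitlab user shown in the update below) is to compare what RubyGems and Bundler resolve to when run as that user:
sudo -u gitlab -H bash -lc 'cd /home/gitlab/gitlab && gem env gemdir && bundle exec ruby -e "puts Gem.path"'
If the paths printed there differ from the load path above, the Passenger-spawned process is not getting the same GEM_HOME/GEM_PATH as the shell.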
UPDATE: I have upgraded to the beta version of Passenger to see if it would make a difference. On error, it reports my environment as follows:
APACHE_PID_FILE = /var/run/apache2.pid
SHELL = /bin/bash
APACHE_RUN_USER = www-data
PASSENGER_DEBUG_DIR = /tmp/passenger.spawn-debug.27672-140235446962704
USER = gitlab
APACHE_LOG_DIR = /var/log/apache2
PATH = /usr/local/bin:/usr/bin:/bin
PWD = /home/gitlab/gitlab
APACHE_RUN_GROUP = www-data
LANG = C
SHLVL = 0
HOME = /home/gitlab
LOGNAME = gitlab
APACHE_LOCK_DIR = /var/lock/apache2
APACHE_RUN_DIR = /var/run/apache2
IN_PASSENGER = 1
PYTHONUNBUFFERED = 1
RAILS_ENV = production
RACK_ENV = production
WSGI_ENV = production
PASSENGER_ENV = production
RAILS_RELATIVE_URL_ROOT = production
RACK_BASE_URI = production
PASSENGER_BASE_URI = production
REQUEST_METHOD = GET
SERVER_PORT = 80
SERVER_ADDR = 10.244.35.233
QUERY_STRING =
SERVER_PROTOCOL = HTTP/1.1
REMOTE_PORT = 4789
REMOTE_ADDR = 72.13.132.134
REQUEST_URI = /gitlab/
SERVER_SOFTWARE = Apache/2.2.22 (Ubuntu)
DOCUMENT_ROOT = /var/www/
SERVER_NAME = shaanan.cohney.info
GEM_PATH =
SERVER_ADMIN = webmaster@localhost
BUNDLE_GEMFILE = /home/gitlab/gitlab/Gemfile
_ORIGINAL_GEM_PATH = /home/ubuntu/.rvm/gems/ruby-1.9.3-p392
GEM_HOME = /home/gitlab/gitlab/vendor/bundle/ruby/1.9.1
I've also noted that _ORIGINAL_GEM_PATH is set to an RVM value from when I had it installed.
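One workaround worth trying (a sketch, not part of the original question) is to force GEM_HOME and GEM_PATH before Bundler runs. Passenger looks for an optional config/setup_load_paths.rb in the application root and loads it before booting the app; the path below is the GEM_HOME from the environment dump above:
# config/setup_load_paths.rb -- loaded by Passenger before the app boots, if it exists
ENV['GEM_HOME'] = '/home/gitlab/gitlab/vendor/bundle/ruby/1.9.1'
ENV['GEM_PATH'] = ENV['GEM_HOME']
Gem.clear_paths  # make RubyGems re-read GEM_HOME/GEM_PATH
With that in place the bundled gem directory should appear in the load path, regardless of what the stale _ORIGINAL_GEM_PATH contains.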

Related

Why does my GitLab runner try to fetch the repo from a different URL than the one I configured?

I deployed a GitLab runner and a GitLab instance on the same server using Docker. After that, I tried to run a few samples to test my runner, but the first job always tells me that it can't access my repository. The weird thing is that it tries to access a totally different URL instead of the one in config.toml.
Here is my config:
concurrent = 1
check_interval = 0
[session_server]
session_timeout = 1800
[[runners]]
name = "******"
url = "http://172.17.0.3:8010/"
token = "*********"
executor = "docker"
[runners.custom_build_dir]
[runners.cache]
[runners.cache.s3]
[runners.cache.gcs]
[runners.cache.azure]
[runners.docker]
tls_verify = false
image = "ubuntu"
privileged = false
disable_entrypoint_overwrite = false
oom_kill_disable = false
disable_cache = false
volumes = ["/cache"]
shm_size = 0
http://172.17.0.3:8010/ is exactly the IP of my GitLab instance inside the Docker network.
Here is where the runner tries to fetch my code:
Running with gitlab-runner 13.12.0 (7a6612da)
on third runner 2ieTUrD1
Preparing the "docker" executor
Using Docker executor with image ubuntu:focal ...
Pulling docker image ubuntu:focal ...
Using docker image sha256:7e0aa2d69a153215c790488ed1fcec162015e973e49962d438e18249d16fa9bd for ubuntu:focal with digest ubuntu@sha256:adf73ca014822ad8237623d388cedf4d5346aa72c270c5acc01431cc93e18e2d ...
Preparing environment
00:01
Running on runner-2ieturd1-project-12-concurrent-0 via dfcd09965d50...
Getting source from Git repository
Fetching changes with git depth set to 50...
Reinitialized existing Git repository in /builds/zh****/e****/.git/
fatal: unable to access 'http://8.136.221.242:8010/zh****/e****.git/': Failed to connect to 8.136.221.242 port 8010: Operation timed out
ERROR: Job failed: exit code 1
Can anyone help me? Thank you so much!
Update your /etc/gitlab/gitlab.rb file and set
external_url "http://172.17.0.3:8010/"
Once you've saved it, run
sudo gitlab-ctl reconfigure
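If changing external_url is not desirable (it also changes the URLs GitLab advertises elsewhere), GitLab Runner has a clone_url setting that overrides only the URL jobs use to fetch sources; a sketch against the config.toml from the question:
[[runners]]
  name = "******"
  url = "http://172.17.0.3:8010/"
  clone_url = "http://172.17.0.3:8010/"   # jobs fetch the repo from here instead of the instance's external_url
  token = "*********"
  executor = "docker"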

How can I make odoo service run with certain flags?

I'm trying to point my Odoo server at a specific configuration file. I know that running Odoo with -c <path> or --config <path> would do the job, but I'm running it on the server as a service, so I can't pass the flag directly, nor can I merge everything into /etc/odoo/odoo.conf, because I need two configuration files.
Does anyone know how I can make the Odoo service run with certain flags (-c and --load)?
Here's my config at /etc/odoo/odoo.conf
[options]
addons_path = /usr/lib/python2.7/dist-packages/odoo/addons,/opt/odoo/addons/odoodoto
admin_passwd = XXXXXXXXXXXX
data_dir = /var/lib/odoo
db_host = False
db_name = False
db_password = False
db_port = 5432
db_user = False
demo = {}
log_level = warn
logfile = /var/log/odoo/odoo-server.log
logrotate = True
proxy_mode = False
And my second config:
[connector-options]
workers = 4
export ODOO_CONNECTOR_CHANNELS=root:5
export ODOO_CONNECTOR_PORT=8069
log-level = warn
And --load=web,connector is the other flag I need.
Without getting into many details of the init systems of Ubuntu, you should have a bash script inside /etc/init.d (probably /etc/init.d/odoo-server).
Inside that file insert a line:
DAEMON_OPTS="-c /etc/odoo/odoo.conf"
Add further parameters to the same string as needed (the module list given to --load is itself comma-separated).
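As a sketch (assuming the init script expands DAEMON_OPTS onto the odoo command line), the extra flag from the question can simply be appended to the same string:
DAEMON_OPTS="-c /etc/odoo/odoo.conf --load=web,connector"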
The file I was looking for is /etc/systemd/system/odoo.service; there, in the [Service] section, you can specify something like ExecStart=/usr/local/bin/odoo --load=web,connector -c /somedir/odoo-server.conf.
You can also configure some of the service settings in /etc/init.d/odoo as George Daramouskas said, but I don't really know how, or whether, you can add the flags I wanted there.
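For the systemd route, a drop-in override avoids editing the unit file directly; a minimal sketch, assuming the service is named odoo and the paths from the answer above:
# sudo systemctl edit odoo  -->  creates /etc/systemd/system/odoo.service.d/override.conf
[Service]
ExecStart=
ExecStart=/usr/local/bin/odoo -c /etc/odoo/odoo.conf --load=web,connector
Then run sudo systemctl daemon-reload && sudo systemctl restart odoo to apply it. The empty ExecStart= line clears the original command before the override replaces it.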

Vagrant up can't find private_key_path

When I try to run vagrant up I get the error:
There are errors in the configuration of this machine. Please fix
the following errors and try again:
SSH:
* `private_key_path` file must exist: /home/buildbot/mykey.pem
However, this file definitely exists. If I run ls -lah /home/buildbot/mykey.pem, it's there. It's owned by my user "buildbot" and has the right permissions. Everything looks good, yet Vagrant can't see it, even though it's running as user "buildbot". Why would this be?
My Vagrantfile is a fairly generic one for AWS:
# -*- mode: ruby -*-
# vi: set ft=ruby :
require 'vagrant-aws'

Vagrant.configure(2) do |config|
  config.vm.box = 'aws-dummy'
  config.vm.provider :aws do |aws, override|
    aws.keypair_name = 'my-key-pair'
    aws.security_groups = ['my-security-group']
    aws.access_key_id = ENV['AWS_ACCESS_KEY']
    aws.secret_access_key = ENV['AWS_SECRET_KEY']
    aws.ami = 'ami-43c92455'
    override.ssh.username = 'ubuntu'
    override.ssh.private_key_path = ENV['AWS_PRIVATE_KEY_PATH']
  end
end
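Since the Vagrantfile reads the key path from an environment variable, one thing worth ruling out is that AWS_PRIVATE_KEY_PATH is unset, points elsewhere, or is unreadable in the environment vagrant up actually runs in (e.g. under buildbot). A hypothetical debugging snippet, not from the original post, placed near the top of the Vagrantfile:
key_path = ENV['AWS_PRIVATE_KEY_PATH']
abort 'AWS_PRIVATE_KEY_PATH is not set in this environment' if key_path.nil? || key_path.empty?
puts "private_key_path: #{key_path.inspect} exists? #{File.exist?(key_path)} readable? #{File.readable?(key_path)}"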

Capistrano 3 runs every command twice (new install) - Configuration issue

I just completed my Capistrano installation for the first time. Most things are left at their default settings; I configured my server, its authentication, and the remote folder, as well as access to my Git repository.
I use Capistrano to deploy PHP code to my server.
cap staging deploy and cap production deploy work, but they run every command twice. This sometimes causes problems when the duplicated tasks execute too quickly on the server and return error codes, which stops the deployment.
An example of my output when running cap staging deploy:
DEBUG[47ecea59] Running /usr/bin/env if test ! -d ~/www/test_server/repo; then echo "Directory does not exist '~/www/test_server/repo'" 1>&2; false; fi on ftp.cluster013.ovh.net
DEBUG[47ecea59] Command: if test ! -d ~/www/test_server/repo; then echo "Directory does not exist '~/www/test_server/repo'" 1>&2; false; fi
DEBUG[c450e730] Running /usr/bin/env if test ! -d ~/www/test_server/repo; then echo "Directory does not exist '~/www/test_server/repo'" 1>&2; false; fi on ftp.cluster013.ovh.net
DEBUG[c450e730] Command: if test ! -d ~/www/test_server/repo; then echo "Directory does not exist '~/www/test_server/repo'" 1>&2; false; fi
It does the same with every single task, except the one I defined myself (in my deploy.rb, I defined a :set_distant_server task that moves files with server info around).
I am pretty sure I missed something during the initial configuration.
Here is my Capfile, still at its default settings:
# Load DSL and Setup Up Stages
require 'capistrano/setup'
# Includes default deployment tasks
require 'capistrano/deploy'
# Includes tasks from other gems included in your Gemfile
# require 'capistrano/rvm'
# require 'capistrano/rbenv'
# require 'capistrano/chruby'
#require 'capistrano/bundler'
#require 'capistrano/rails/assets'
#require 'capistrano/rails/migrations'
# Loads custom tasks from `lib/capistrano/tasks' if you have any defined.
Dir.glob('lib/capistrano/tasks/*.rake').each { |r| import r }
Followed by my deploy.rb file:
# config valid only for Capistrano 3.1
lock '3.2.1'
set :scm, :git
set :application, 'Application name'
# I use token authentification
set :repo_url, 'https://XXXXXXXXXXX:#XXXXXXX.git'
set :role, 'web'
# Default value for :log_level is :debug
set :log_level, :debug
set :tmp_dir, 'www/test_server/tmp'
set :keep_releases, 8
role :deploy_server, "XXXuser_name@XXXX_server"
task :set_distant do
  on roles(:deploy_server) do
    execute 'echo ------------******* STAGING *******------------'
    execute 'cp ~/www/test_server/current/access_distant.php ~/www/test_server/current/access.php'
    execute 'cp ~/www/test_server/current/session_distant.php ~/www/test_server/current/session.php'
  end
end
after "deploy:finished", :set_distant
Here is my staging.rb, much shorter:
server 'XXX_server', user: 'XXXuser_name', roles: %w{web}, port: 22, password: 'XXXpassword'
set :deploy_to, '~/www/test_server'
set :branch, 'staging'
And my production.rb, very similar:
server 'XXX_server', user: 'XXXuser_name', roles: %w{web}, port: 22, password: 'XXXpassword'
set :deploy_to, '~/www/beta/'
I'm pretty sure I missed a step somewhere in the prerequisites to make it run nicely. I am new to Ruby and to gems, and I hadn't used a shell in a very long time.
Does anyone see why those commands are run twice, and how I could fix it?
In advance, many many thanks.
Additional info:
Ruby version: ruby -v
ruby 2.1.2p95 (2014-05-08 revision 45877) [x86_64-darwin13.0]
Capistrano version: cap -V
Capistrano Version: 3.2.1 (Rake Version: 10.1.0)
I did not create or set up a Gemfile; I understood it was not needed with Capistrano 3. In any case, I would not know how to do it.
I was having this same issue and realized I didn't need both
role :web
and
server '<server>'
I got rid of role :web and that got rid of the second execution.
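Applied to the configuration in the question, that means declaring the host in one place only; a sketch (keeping :deploy_server among the server's roles so the custom task still has a target):
# deploy.rb -- remove the duplicate host declaration
# role :deploy_server, "XXXuser_name@XXXX_server"

# staging.rb -- the single host definition
server 'XXX_server', user: 'XXXuser_name', roles: %w{web deploy_server}, port: 22, password: 'XXXpassword'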

unicorn working_directory with symlink

We're having trouble hot-deploying with unicorn. We pretty much use the canonical unicorn.rb config and set working_directory to point to the symlinked folder, but unicorn somehow stays stuck on the folder the symlink resolved to when it was first started and fails to follow the updated symlink.
# config/unicorn.rb
if ENV['RAILS_ENV'] == 'production'
  worker_processes 4
else
  worker_processes 2
end

working_directory "/var/local/project/symlinkfolder"

# Listen on unix socket
listen "/tmp/unicorn.sock", :backlog => 64

pid "/var/run/unicorn/unicorn.pid"
stderr_path "/var/log/unicorn/unicorn.log"
stdout_path "/var/log/unicorn/unicorn.log"

preload_app true

before_fork do |server, worker|
  # the following is highly recommended for Rails + "preload_app true"
  # as there's no need for the master process to hold a connection
  if defined?(ActiveRecord::Base)
    ActiveRecord::Base.connection.disconnect!
  end

  # Before forking, kill the master process that belongs to the .oldbin PID.
  # This enables 0 downtime deploys.
  old_pid = "/var/run/unicorn/unicorn.pid.oldbin"
  if File.exists?(old_pid) && server.pid != old_pid
    begin
      Process.kill("QUIT", File.read(old_pid).to_i)
    rescue Errno::ENOENT, Errno::ESRCH
      # someone else did our job for us
    end
  end
end

after_fork do |server, worker|
  # the following is *required* for Rails + "preload_app true",
  if defined?(ActiveRecord::Base)
    ActiveRecord::Base.establish_connection
  end

  # this makes sure the logging-rails framework works when preload_app = true
  Logging.reopen

  # if preload_app is true, then you may also want to check and
  # restart any other shared sockets/descriptors such as Memcached,
  # and Redis. TokyoCabinet file handles are safe to reuse
  # between any number of forked children (assuming your kernel
  # correctly implements pread()/pwrite() system calls)
end
When we issue a USR2, we see this in the unicorn log:
executing ["/var/local/project/project.d/6/vendor/bundle/ruby/1.9.1/bin/unicorn_rails", "-E", "staging", "-D", "-c", "/var/local/project/symlinkfolder/config/unicorn.rb"│·
, {12=>#<Kgio::UNIXServer:fd 12>}] (in /var/local/project/project.d/8)
So unicorn is somehow 'stuck' on version 6, while the actual symlinked folder is on version 8. This becomes a problem as soon as we prune the version 6 folder after a few deploys...
The working_directory is set to the symlink'd folder
The symlink points to /var/local/project/project.d/[id] folder correctly
We update the symlink before sending the USR2 signal
What did we miss??
The solution was to explicitly set the unicorn binary path, as explained (in a somewhat confusing way) at http://unicorn.bogomips.org/Sandbox.html:
app_root = "/var/local/project/symlinkfolder"
working_directory app_root
# see http://unicorn.bogomips.org/Sandbox.html
Unicorn::HttpServer::START_CTX[0] = "#{app_root}/vendor/bundle/ruby/1.9.1/bin/unicorn_rails"
Then we needed to issue a unicorn reload (kill -HUP) so unicorn re-reads the config file. From then on, issuing a USR2 signal works properly.
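For reference, the sequence we use after updating the symlink (a sketch, assuming the pid file path from the config above):
kill -HUP  "$(cat /var/run/unicorn/unicorn.pid)"   # re-read config/unicorn.rb, picking up the new START_CTX binary path
kill -USR2 "$(cat /var/run/unicorn/unicorn.pid)"   # re-exec the master from the symlinked folder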