We're having trouble hot-deploying with Unicorn. We use pretty much the canonical unicorn.rb config and set working_directory to point to the symlinked folder, but Unicorn seems stuck on the real folder it resolved when it was first started and fails to follow the symlink.
# config/unicorn.rb
if ENV['RAILS_ENV'] == 'production'
  worker_processes 4
else
  worker_processes 2
end

working_directory "/var/local/project/symlinkfolder"

# Listen on unix socket
listen "/tmp/unicorn.sock", :backlog => 64

pid "/var/run/unicorn/unicorn.pid"

stderr_path "/var/log/unicorn/unicorn.log"
stdout_path "/var/log/unicorn/unicorn.log"

preload_app true

before_fork do |server, worker|
  # the following is highly recommended for Rails + "preload_app true"
  # as there's no need for the master process to hold a connection
  if defined?(ActiveRecord::Base)
    ActiveRecord::Base.connection.disconnect!
  end

  # Before forking, kill the master process that belongs to the .oldbin PID.
  # This enables 0 downtime deploys.
  old_pid = "/var/run/unicorn/unicorn.pid.oldbin"
  if File.exists?(old_pid) && server.pid != old_pid
    begin
      Process.kill("QUIT", File.read(old_pid).to_i)
    rescue Errno::ENOENT, Errno::ESRCH
      # someone else did our job for us
    end
  end
end

after_fork do |server, worker|
  # the following is *required* for Rails + "preload_app true",
  if defined?(ActiveRecord::Base)
    ActiveRecord::Base.establish_connection
  end

  # this makes sure the logging-rails framework works when preload_app = true
  Logging.reopen

  # if preload_app is true, then you may also want to check and
  # restart any other shared sockets/descriptors such as Memcached,
  # and Redis. TokyoCabinet file handles are safe to reuse
  # between any number of forked children (assuming your kernel
  # correctly implements pread()/pwrite() system calls)
end
When we issue a USR2, we see this in the unicorn log:
executing ["/var/local/project/project.d/6/vendor/bundle/ruby/1.9.1/bin/unicorn_rails", "-E", "staging", "-D", "-c", "/var/local/project/symlinkfolder/config/unicorn.rb", {12=>#<Kgio::UNIXServer:fd 12>}] (in /var/local/project/project.d/8)
So Unicorn is somehow 'stuck' on version 6, whilst the symlink actually points to version 8. This becomes a problem as soon as we prune the version 6 folder after a few deploys.
The working_directory is set to the symlink'd folder
The symlink points to /var/local/project/project.d/[id] folder correctly
We update the symlink before sending the USR2 signal
What did we miss?
The solution was to explicitly set the unicorn binary path, as explained (in a somewhat confusing way) on http://unicorn.bogomips.org/Sandbox.html:
app_root = "/var/local/project/symlinkfolder"
working_directory app_root
# see http://unicorn.bogomips.org/Sandbox.html
Unicorn::HttpServer::START_CTX[0] = "#{app_root}/vendor/bundle/ruby/1.9.1/bin/unicorn_rails"
Then we needed to reload Unicorn once (kill -HUP) so it re-reads the config file. From then on, issuing a USR2 signal works properly.
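For completeness, here is a minimal sketch of that signal sequence as a Ruby one-off (it assumes the pid file path from the config above; the sleep is just a crude way to let the HUP reload finish before re-executing):

master_pid = File.read("/var/run/unicorn/unicorn.pid").to_i

Process.kill("HUP", master_pid)   # re-read unicorn.rb so START_CTX[0] takes effect
sleep 5                           # crude pause so the reload can finish (adjust as needed)
Process.kill("USR2", master_pid)  # re-exec the master from the symlinked path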
When I try to start or stop Apache with the command
/ebiz/apache/httpd/sbin/apachectl -k stop -f /ebiz/apache/httpd/conf/httpd.conf
I get this error:
/ebiz/apache/httpd/sbin/apachectl: line 122: ./httpd: No such file or directory
But when I execute the same command from the sbin directory, it works fine. I have installed Apache in a custom location: apachectl is in "/ebiz/apache/httpd/sbin" and httpd.conf is in "/ebiz/apache/httpd/conf/".
If I look at my apachectl file:
#!/bin/sh
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#
# Apache control script designed to allow an easy command line interface
# to controlling Apache. Written by Marc Slemko, 1997/08/23
#
# The exit codes returned are:
# XXX this doc is no longer correct now that the interesting
# XXX functions are handled by httpd
# 0 - operation completed successfully
# 1 -
# 2 - usage error
# 3 - httpd could not be started
# 4 - httpd could not be stopped
# 5 - httpd could not be started during a restart
# 6 - httpd could not be restarted during a restart
# 7 - httpd could not be restarted during a graceful restart
# 8 - configuration syntax error
#
# When multiple arguments are given, only the error from the _last_
# one is reported. Run "apachectl help" for usage info
#
ACMD="$1"
ARGV="$@"
#
# |||||||||||||||||||| START CONFIGURATION SECTION ||||||||||||||||||||
# -------------------- --------------------
#
# the path to your httpd binary, including options if necessary
HTTPD='./httpd'
#
#
# a command that outputs a formatted text version of the HTML at the
# url given on the command line. Designed for lynx, however other
# programs may work.
if [ -x "/usr/bin/links" ]; then
    LYNX="/usr/bin/links -dump"
else
    LYNX=none
fi
#
# the URL to your server's mod_status status page. If you do not
# have one, then status and fullstatus will not work.
STATUSURL="http://localhost:80/server-status"
#
# Set this variable to a command that increases the maximum
# number of file descriptors allowed per child process. This is
# critical for configurations that use many file descriptors,
# such as mass vhosting, or a multithreaded server.
ULIMIT_MAX_FILES="ulimit -S -n `ulimit -H -n`"
# -------------------- --------------------
# |||||||||||||||||||| END CONFIGURATION SECTION ||||||||||||||||||||
# Set the maximum number of file descriptors allowed per child process.
if [ "x$ULIMIT_MAX_FILES" != "x" ] ; then
    $ULIMIT_MAX_FILES
fi

ERROR=0
if [ "x$ARGV" = "x" ] ; then
    ARGV="-h"
fi

function checklynx() {
    if [ "$LYNX" = "none" ]; then
        echo "The 'links' package is required for this functionality."
        exit 8
    fi
}

function testconfig() {
    # httpd is denied terminal access in SELinux, so run in the
    # current context to get stdout from $HTTPD -t.
    if test -x /usr/sbin/selinuxenabled && /usr/sbin/selinuxenabled; then
        runcon -- `id -Z` $HTTPD $OPTIONS -t
    else
        $HTTPD $OPTIONS -t
    fi
    ERROR=$?
}
case $ACMD in
start|stop|restart|graceful|graceful-stop)
    $HTTPD $OPTIONS -k $ARGV
    ERROR=$?
    ;;
startssl|sslstart|start-SSL)
    echo The startssl option is no longer supported.
    echo Please edit httpd.conf to include the SSL configuration settings
    echo and then use "apachectl start".
    ERROR=2
    ;;
configtest)
    testconfig
    ;;
status)
    checklynx
    $LYNX $STATUSURL | awk ' /process$/ { print; exit } { print } '
    ;;
fullstatus)
    checklynx
    $LYNX $STATUSURL
    ;;
*)
    $HTTPD $OPTIONS "$@"
    ERROR=$?
esac
exit $ERROR
On line 122 it runs $HTTPD $OPTIONS "$@". How can I run Apache from anywhere?
You can see that the path to httpd is hardcoded to the relative path ./httpd in apachectl:
# the path to your httpd binary, including options if necessary
HTTPD='./httpd'
which points to the wrong location if you're not in the right working directory.
It should be sufficient to change the path to the absolute path to the binary, i.e.
# the path to your httpd binary, including options if necessary
HTTPD=/ebiz/apache/httpd/sbin/httpd
I just completed my Capistrano installation for the first time. Almost everything is left at the default settings; I configured my server, its authentication, the remote folder, and access to my git repository.
I use capistrano to deploy php code to my server.
cap staging deploy and cap production deploy work, but they run every command twice. This sometimes causes problems when those tasks are executed in quick succession on the server: they return error codes, which stops the deployment.
An example of my output when running cap staging deploy:
DEBUG[47ecea59] Running /usr/bin/env if test ! -d ~/www/test_server/repo; then echo "Directory does not exist '~/www/test_server/repo'" 1>&2; false; fi on ftp.cluster013.ovh.net
DEBUG[47ecea59] Command: if test ! -d ~/www/test_server/repo; then echo "Directory does not exist '~/www/test_server/repo'" 1>&2; false; fi
DEBUG[c450e730] Running /usr/bin/env if test ! -d ~/www/test_server/repo; then echo "Directory does not exist '~/www/test_server/repo'" 1>&2; false; fi on ftp.cluster013.ovh.net
DEBUG[c450e730] Command: if test ! -d ~/www/test_server/repo; then echo "Directory does not exist '~/www/test_server/repo'" 1>&2; false; fi
It does the same with every single task, except the one I defined myself (in my deploy.rb, I defined a :set_distant_server task that moves around files with server info)
I am pretty sure I missed something during the initial configuration.
Here is my Capfile, still at the default settings:
# Load DSL and Setup Up Stages
require 'capistrano/setup'
# Includes default deployment tasks
require 'capistrano/deploy'
# Includes tasks from other gems included in your Gemfile
# require 'capistrano/rvm'
# require 'capistrano/rbenv'
# require 'capistrano/chruby'
#require 'capistrano/bundler'
#require 'capistrano/rails/assets'
#require 'capistrano/rails/migrations'
# Loads custom tasks from `lib/capistrano/tasks' if you have any defined.
Dir.glob('lib/capistrano/tasks/*.rake').each { |r| import r }
Followed by my deploy.rb file:
# config valid only for Capistrano 3.1
lock '3.2.1'
set :scm, :git
set :application, 'Application name'
# I use token authentification
set :repo_url, 'https://XXXXXXXXXXX:#XXXXXXX.git'
set :role, 'web'
# Default value for :log_level is :debug
set :log_level, :debug
set :tmp_dir, 'www/test_server/tmp'
set :keep_releases, 8
role :deploy_server, "XXXuser_name@XXXX_server"
task :set_distant do
  on roles(:deploy_server) do
    execute 'echo ------------******* STAGING *******------------'
    execute 'cp ~/www/test_server/current/access_distant.php ~/www/test_server/current/access.php'
    execute 'cp ~/www/test_server/current/session_distant.php ~/www/test_server/current/session.php'
  end
end
after "deploy:finished", :set_distant
Here is my staging.rb, much shorter:
server 'XXX_server', user: 'XXXuser_name', roles: %w{web}, port: 22, password: 'XXXpassword'
set :deploy_to, '~/www/test_server'
set :branch, 'staging'
And my production.rb, very similar:
server 'XXX_server', user: 'XXXuser_name', roles: %w{web}, port: 22, password: 'XXXpassword'
set :deploy_to, '~/www/beta/'
I'm pretty sure I missed a step in the prerequisites to make it run nicely. I am new to Ruby and to gems, and haven't used a shell in a very long time.
Does anyone see why those commands are run twice, and how I could fix it?
In advance, many many thanks.
Additional info:
Ruby version: ruby -v
ruby 2.1.2p95 (2014-05-08 revision 45877) [x86_64-darwin13.0]
Capistrano version: cap -V
Capistrano Version: 3.2.1 (Rake Version: 10.1.0)
I did not create or set up a Gemfile; I understood it was not needed in Capistrano 3. Anyway, I would not know how to do it.
I was having this same issue and realized I didn't need both
role :web
and
server '<server>'
I got rid of role :web and that got rid of the 2nd execution.
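In other words, the host ends up registered twice (once via role, once via server), so Capistrano treats it as two servers and runs every task on "both". A minimal sketch of the de-duplicated staging.rb, keeping only the server declaration (placeholders as in the question):

# config/deploy/staging.rb: declare the host exactly once; the roles: option
# here makes a separate role / `set :role` line unnecessary
server 'XXX_server',
  user: 'XXXuser_name',
  roles: %w{web},
  port: 22,
  password: 'XXXpassword'

set :deploy_to, '~/www/test_server'
set :branch, 'staging'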
chef-solo hangs at the end when installing Redis, as if Chef is waiting for some event to occur. Here is the output from when I had to kill it with Ctrl+C:
[2013-05-14T15:55:27+00:00] ERROR: Running exception handlers
[2013-05-14T15:55:27+00:00] ERROR: Exception handlers complete
Chef Client failed. 8 resources updated
[2013-05-14T15:55:27+00:00] FATAL: Stacktrace dumped to /home/ubuntu/cache/chef-stacktrace.out
[2013-05-14T15:55:27+00:00] FATAL: Chef::Exceptions::MultipleFailures: Multiple failures occurred:
* SystemExit occurred in chef run: service[redis] (redis::default line 107) had an error: SystemExit: exit
* Chef::Exceptions::Exec occurred in delayed notification: service[redis] (redis::default line 83) had an error: Chef::Exceptions::Exec: /sbin/start redis returned 1, expected 0
I am new to Chef and unable to figure out why this is happening. Has anyone noticed this behaviour before?
Here is my recipe file:
package "build-essential" do
  action :install
end

user node[:redis][:user] do
  action :create
  system true
  shell "/bin/false"
end

directory node[:redis][:dir] do
  owner node[:redis][:user]
  group node[:redis][:user]
  mode "0755"
  action :create
end

directory node[:redis][:data_dir] do
  owner node[:redis][:user]
  group node[:redis][:user]
  mode "0755"
  action :create
end

directory node[:redis][:log_dir] do
  owner node[:redis][:user]
  group node[:redis][:user]
  mode "0755"
  action :create
end

remote_file "#{Chef::Config[:file_cache_path]}/redis-2.6.10.tar.gz" do
  source "http://redis.googlecode.com/files/redis-2.6.10.tar.gz"
  action :create_if_missing
end
# Adding 'make test' causes the install to freeze for some reason.
bash "compile_redis_source" do
  cwd Chef::Config[:file_cache_path]
  code <<-EOH
    tar zxf redis-2.6.10.tar.gz
    cd redis-2.6.10
    make && sudo make install
    # to give permissions to the executables that it copied to.
    chown -R redis:redis /usr/local/bin
  EOH
  creates "/usr/local/bin/redis-server"
end

service "redis" do
  provider Chef::Provider::Service::Upstart
  subscribes :restart, resources(:bash => "compile_redis_source")
  supports :restart => true, :start => true, :stop => true
end

template "redis.conf" do
  path "#{node[:redis][:dir]}/redis.conf"
  source "redis.conf.erb"
  owner node[:redis][:user]
  group node[:redis][:user]
  mode "0644"
  notifies :restart, resources(:service => "redis")
end

template "redis.upstart.conf" do
  path "/etc/init/redis.conf"
  source "redis.upstart.conf.erb"
  owner node[:redis][:user]
  group node[:redis][:user]
  mode "0644"
  notifies :restart, resources(:service => "redis")
end

service "redis" do
  action [:enable, :start]
end
There are two service "redis" resource statements. Is that a problem, or how does Chef handle this case; does it merge them into a single resource at run time?
I am using Upstart, and here is the redis.upstart.conf.erb file. I am not sure if anything is wrong with it. Does the order of the statements matter in this file?
#!upstart
description "Redis Server"
emits redis-server
# run when the local FS becomes available
start on local-filesystems
stop on shutdown
setuid redis
setgid redis
expect fork
# Respawn unless redis dies 10 times in 5 seconds
#respawn
#respawn limit 10 5
# start a default instance
instance $NAME
env NAME=redis
#instance $NAME
# run redis as the correct user
#setuid redis
#setgid redis
# run redis with the correct config file for this instance
exec /usr/local/bin/redis-server /etc/redis/redis.conf
respawn
#respawn limit 10 5
I think Dmytro was on the right path, but not exactly.
I see that you are using Upstart as the service provider in Chef. Please check your Upstart config for redis-server for any expect stanza. If you have an expect fork or expect daemon statement in there, it means that when starting redis-server, Upstart will wait for the Redis process to fork once or twice respectively. If you have daemonize no in redis.conf, the Redis process never forks, and therefore Upstart just hangs at the execution of the init script.
Your Redis is not failing to start; it simply runs in the foreground.
I had a similar problem with one of the Redis cookbooks I was using. Its redis.conf.erb file had the configuration option
daemonize no
Some other cookbooks make this option configurable by attribute, so your fix depends on the cookbook you are using: either edit your redis.conf.erb file, or find how that attribute is configured and set it to yes.
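A minimal sketch of the attribute-driven variant (the node[:redis][:daemonize] attribute name is my own, not from the original cookbook):

# attributes/default.rb (hypothetical attribute backing the fix)
default[:redis][:daemonize] = "yes"

# templates/default/redis.conf.erb should then render the directive from the
# attribute, so it stays consistent with the `expect fork` stanza in the
# Upstart job:
#   daemonize <%= node[:redis][:daemonize] %>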
What possible reasons can prevent Sidekiq from processing jobs in the queue? The queue is full, yet the log file sidekiq.log indicates no activity at all, so Sidekiq does not seem to process items; there appears to be no worker processing jobs. Restarting Redis or flushing it with FLUSHALL or FLUSHDB has no effect. Sidekiq has been started with
bundle exec sidekiq -L log/sidekiq.log
and produces the following log file:
2013-05-30..Booting Sidekiq 2.12.0 using redis://localhost:6379/0 with options {}
2013-05-30..Running in ruby 1.9.3p374 (2013-01-15 revision 38858) [i686-linux]
2013-05-30..See LICENSE and the LGPL-3.0 for licensing details.
2013-05-30..Starting processing, hit Ctrl-C to stop
How can you find out what went wrong? Are there any hidden log files?
In our case the reason was that Sidekiq was looking at the wrong queue. By default Sidekiq uses a queue named "default". We used two different queue names and defined them in config/sidekiq.yml:
# configuration file for Sidekiq
:queues:
- queue_name_1
- queue_name_2
The problem is that this config file is not automatically loaded by a plain bundle exec sidekiq command in the development environment (unlike database.yml or thinking_sphinx.yml, for instance). Thus we were pushing our jobs onto two specific queues, while Sidekiq was waiting for jobs in a third queue (the default one). You have to pass the path to the config file through the -C or --config option:
bundle exec sidekiq -C ./config/sidekiq.yml
or you can pass the queue names directly (no spaces allowed here after the comma):
bundle exec sidekiq -q queue_name_1,queue_name_2
To track the problem down it is helpful to pass the -v or --verbose option on the command line, too, or to use :verbose: true in sidekiq.yml. Everything defined in a config file is of course useless if the config file is not loaded, so first make sure you are using the right config file.
If you have a config/sidekiq.yml check that all the queues are defined there, check this sample file: https://github.com/mperham/sidekiq/blob/master/examples/config.yml
If you are passing queue names in the command line or Procfile, something similar to
bin/sidekiq -q queue1 -q queue2
bundle exec sidekiq -q queue1 -q queue2
check that all your queues are defined there.
In case you are not sure about the names of your queues, you can figure them out with the following script:
require "sidekiq/api"
stats = Sidekiq::Stats.new
stats.queues
# {"production_mailers"=>25, "production_default"=>1}
Then, you can do things with the queues:
queue = Sidekiq::Queue.new("production_mailers")
queue.count
queue.clear
It took me hours to find out that I had set config.active_job.queue_name_prefix = "xxxxx_#{Rails.env}". The queue names in the settings look the same, but Sidekiq looks for the queue with the prefix.
Wrong setting
app/jobs/my_job.rb
class MyJob < ApplicationJob
  queue_as :default
end
config/sidekiq.yml
:queues:
- default
Correct setting
app/jobs/my_job.rb
class MyJob < ApplicationJob
  queue_as :default
end
config/sidekiq.yml
:queues:
- xxxxx_development_default
- xxxxx_production_default
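A quick sanity check, assuming the MyJob class above, is to ask ActiveJob for the effective name in a Rails console; this is the queue Sidekiq actually has to listen on:

MyJob.new.queue_name
# => "xxxxx_development_default" (in the development environment)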
My problem was that I had a configure_server block but not a configure_client block in my initialiser; you must have both:
Sidekiq.configure_server do |config|
  config.redis = { url: ENV.fetch('SIDEKIQ_REDIS_URL', 'redis://127.0.0.1:6379/1') }
end

Sidekiq.configure_client do |config|
  config.redis = { url: ENV.fetch('SIDEKIQ_REDIS_URL', 'redis://127.0.0.1:6379/1') }
end
In my case, Sidekiq was fine in development but stuck in staging. It was human error in the Capistrano deploy configuration: I had set the path for sidekiq.yml incorrectly in the Capfile (shared instead of current).
It failed silently:
# Capfile
# WRONG:
set :sidekiq_config, -> { File.join(shared_path, 'config', 'sidekiq.yml') }
^^^^^^^^^^^
# RIGHT:
set :sidekiq_config, -> { File.join(current_path, 'config', 'sidekiq.yml') }
flushing redis worked for me.
WARNING: THIS WILL REMOVE ALL DATA IN YOUR REDIS DATABASE.
redis-cli flushall
I was banging my head against a brick wall on this for a while; my issue was that Sidekiq required a newer version of redis-server. Running "bundle exec sidekiq" in the foreground revealed the error. Once I updated to a newer version of redis-server it was fine.
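If you suspect a Redis version mismatch, one way to check the server Sidekiq is actually talking to (a quick console sketch, not Sidekiq's own diagnostics) is:

require "sidekiq"
Sidekiq.redis { |conn| conn.info["redis_version"] }
# => e.g. "2.4.10"; compare against the minimum version required by your Sidekiq release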
I just had this issue. Turns out I had made a syntax error in my sidekiq.yml
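If you suspect the same, a quick way to surface a YAML syntax error is to parse the file by hand in an irb or Rails console (plain Ruby, nothing Sidekiq-specific):

require "yaml"
YAML.load_file("config/sidekiq.yml")  # raises Psych::SyntaxError with line and column if the file is malformed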
I spent at least two hours on this as well, because the queues, the configuration, and the web UI were all fine ... the jobs were just not processed. My issue was that the Sidekiq server was not running in my docker-compose setup, even though it should have been started by the command entry here:
sidekiq:
  depends_on:
    - 'proddb'
    - 'redis'
  build: rails-app
  # --> the line that should have started Sidekiq:
  command: bundle exec sidekiq --environment ${RAILS_ENV} -C config/sidekiq.yml
  volumes:
    - './rails-app:/project'
    - '/project/tmp' # don't mount tmp directory
  environment:
    - REDIS_URL_SIDEKIQ=${REDIS_URL_SIDEKIQ}
  networks:
    - backend
My problem was that I had not configured initializers/sidekiq.rb properly, but even with the correct config Sidekiq was still not running enqueued jobs. I had to run spring stop on top of that and restart everything, and that solved my issue.
I encountered a similar problem wherein the logs would show entries such as INFO Rails : queueing TestWorker (TestWorker). However, the jobs would never get processed, and none of the answers in this question solved the issue.
The tl;dr to my solution is that Sidekiq's Testing Client was getting unexpectedly triggered.
I eventually deduced that there is some "magic" going on under the surface that makes it difficult to determine exactly where/when/how the testing mode above was getting configured, based on the following anecdote...
Running bundle exec sidekiq -C config/sidekiq.yml -e development had the result that Sidekiq::Testing.fake? == true
However, running bundle exec sidekiq -C config/sidekiq.yml -e development_2 had the result that Sidekiq::Testing.fake? == false
^ The only difference between these 2 commands is that I renamed the development environment in sidekiq.yml to development_2, i.e. the same/equivalent environment was running with both commands (at least, presumably it would be the same environment if it wasn't for this inane "magic" under the hood).
I updated sidekiq.rb to explicitly toggle Sidekiq::Testing via the following:
sidekiq_testing_fake = false # set this using env var, etc.

if sidekiq_testing_fake
  Sidekiq::Testing.fake!
elsif Sidekiq.constants.include?(:Testing)
  Sidekiq::Testing.disable!
end
My issue was that I had both a standalone redis-server and Redis.app's redis-server running; I killed the standalone redis-server (and kept the Redis.app one).
I've been experiencing an issue where Capistrano deploys an old revision from our git repo unless we specify the exact revision we want to deploy.
#this will deploy a revision from a couple weeks ago
cap staging deploy:migrations
#this will deploy correctly a new revision
cap staging deploy:migrations -S revision=74d27c00363cdcd456942d6951230564893ccb28
Does anyone have an idea why this could be happening?
Here is the cap deploy file:
set :rvm_ruby_string, 'ruby-1.9.3-p194@ac_helenefrance_01' # Or:
#set :rvm_ruby_string, ENV['GEM_HOME'].gsub(/.*\//,"") # Read from local system
require "rvm/capistrano" # Load RVM's capistrano plugin.
require "bundler/capistrano"
# set :verbose ,1
require 'capistrano/ext/multistage'
set :stages, %w(staging production)
set :default_stage, "staging"
set :user, "webm"
# set :deploy_via, :remote_cache
set :use_sudo, false
set :scm, "git"
set :repository, "git@aliencom.beanstalkapp.com:/ac_helenefrance_01.git"
# :branch is being set in stage files
default_run_options[:pty] = true
# ssh_options[:forward_agent] = true
after "deploy", "deploy:cleanup" # keep only the last 5 releases
namespace :deploy do
  %w[start stop restart].each do |command|
    desc "#{command} unicorn server"
    task command, roles: :app, except: {no_release: true} do
      run "#{sudo} service unicorn_#{server_configuration} #{command}"
    end
  end

  desc "build missing paperclip styles"
  task :build_missing_paperclip_styles, :roles => :app do
    run "cd #{release_path}; RAILS_ENV=production bundle exec rake paperclip:refresh:missing_styles"
  end
  after "deploy:update", "deploy:build_missing_paperclip_styles"

  task :setup_config, roles: :app do
    puts "#making symlink to nginx sites-enabled"
    run "#{sudo} ln -fs #{current_path}/config/server/#{server_configuration}/nginx.conf /etc/nginx/sites-enabled/#{server_configuration}"
    puts "#making symlink to unicorn service script"
    run "#{sudo} ln -fs #{current_path}/config/server/#{server_configuration}/unicorn_init.sh /etc/init.d/unicorn_#{server_configuration}"
    puts "#making the new config directory"
    run "mkdir -p #{shared_path}/config"
    run "sunique 1"
    put File.read("config/database.yml"), "#{shared_path}/config/database.yml"
    run "sunique 0"
    puts "Now edit the config files in #{shared_path}."
  end
  after "deploy:setup", "deploy:setup_config"

  task :symlink_config, roles: :app do
    run "ln -nfs #{shared_path}/config/database.yml #{release_path}/config/database.yml"
    puts "#for reference:"
    puts "#rvm wrapper 1.9.3@ac_helenefrance_01 ruby-1.9.3-p194@#{server_configuration} unicorn cap"
    puts "#now be sure to run: sudo update-rc.d unicorn_#{server_configuration} defaults"
  end
  after "deploy:finalize_update", "deploy:symlink_config"
end
And here is the environment/stage.rb for the default multistage environment:
server "xxx.xxx.xxx.xxx", :web, :app, :db, primary: true
set :branch, "sitemap"
set :isRemote, true
set :server_configuration, "st_ac_helenefrance_01"
set :application, "#{server_configuration}"
set :deploy_to, "/home/#{user}/#{server_configuration}"
It was set up to an old branch >:(
My colleague accidentally committed a test he was doing.
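One way to guard against a stale branch sneaking in like this (a sketch in the Capistrano 2 syntax used above; the BRANCH variable name is my own) is to make the branch explicit and overridable instead of silently pinned in the stage file:

# in deploy.rb or the stage file
set :branch, ENV.fetch("BRANCH", "sitemap")  # falls back to the stage's current branch
# usage: BRANCH=master cap staging deploy:migrations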