Ruby on Rails Capistrano Update release - ruby-on-rails-3

I'm confused about what to do to get Capistrano to deploy my latest changes.
I've committed my changes to git; here are all the steps I took:
git commit -a
git push
(all files successfully pushed to the remote git repository. all changes noted)
cap deploy
But it doesn't deploy the latest version of the site.
In fact it deploys the oldest version.
Cap Deploy Response
triggering load callbacks
* 2013-07-13 17:09:08 executing `deploy:update'
** transaction: start
* 2013-07-13 17:09:08 executing `deploy:update_code'
executing locally: "git ls-remote ssh://ubuntu@54.229.78.34/~/liquid_admin.git master"
command finished in 3150ms
* executing "git clone -b master --depth 1 ssh://ubuntu#54.229.78.34/~/liquid_admin.git /home/ubuntu/liquid_admin/releases/20130713150911 && cd /home/ubuntu/liquid_admin/releases/20130713150911 && git checkout -b deploy d609108bf81df3cb558f7536c3cee98d852b4ec5 && git submodule init && git submodule sync && export GIT_RECURSIVE=$([ ! \"`git --version`\" \\< \"git version 1.6.5\" ] && echo --recursive) && git submodule update --init $GIT_RECURSIVE && rm -Rf /home/ubuntu/liquid_admin/releases/20130713150911/.git && (echo d609108bf81df3cb558f7536c3cee98d852b4ec5 > /home/ubuntu/liquid_admin/releases/20130713150911/REVISION)"
servers: ["54.229.78.34"]
[54.229.78.34] executing command
** [54.229.78.34 :: out] Cloning into '/home/ubuntu/liquid_admin/releases/20130713150911'...
** [54.229.78.34 :: out] remote: Counting objects: 276, done.
** [54.229.78.34 :: out] remote: Compressing objects: 1% (3/239)
** [54.229.78.34 :: out] remote: Compressing objects: 2% (5/239)
** [54.229.78.34 :: out] remote: Compressing objects: 3% (8/239)
** [54.229.78.34 :: out] remote: Compressing objects: 4% (10/239)
** [54.229.78.34 :: out] remote: Compressing objects: 5% (12/239)
** [54.229.78.34 :: out] remote: Compressing objects: 6% (15/239)
(followed by many more progress lines like these, then...)
** [54.229.78.34 :: out] Resolving deltas: 100% (58/58), done.
** [54.229.78.34 :: out] Switched to a new branch 'deploy'
command finished in 5206ms
* 2013-07-13 17:09:19 executing `deploy:finalize_update'
triggering before callbacks for `deploy:finalize_update'
* 2013-07-13 17:09:19 executing `bundle:install'
* executing "cd /home/ubuntu/liquid_admin/releases/20130713150911 && bundle install --gemfile /home/ubuntu/liquid_admin/releases/20130713150911/Gemfile --path /home/ubuntu/liquid_admin/shared/bundle --deployment --quiet --without development test"
servers: ["54.229.78.34"]
[54.229.78.34] executing command
command finished in 2138ms
* executing "chmod -R -- g+w /home/ubuntu/liquid_admin/releases/20130713150911 && rm -rf -- /home/ubuntu/liquid_admin/releases/20130713150911/public/system && mkdir -p -- /home/ubuntu/liquid_admin/releases/20130713150911/public/ && ln -s -- /home/ubuntu/liquid_admin/shared/system /home/ubuntu/liquid_admin/releases/20130713150911/public/system && rm -rf -- /home/ubuntu/liquid_admin/releases/20130713150911/log && ln -s -- /home/ubuntu/liquid_admin/shared/log /home/ubuntu/liquid_admin/releases/20130713150911/log && rm -rf -- /home/ubuntu/liquid_admin/releases/20130713150911/tmp/pids && mkdir -p -- /home/ubuntu/liquid_admin/releases/20130713150911/tmp/ && ln -s -- /home/ubuntu/liquid_admin/shared/pids /home/ubuntu/liquid_admin/releases/20130713150911/tmp/pids"
servers: ["54.229.78.34"]
[54.229.78.34] executing command
command finished in 756ms
* executing "find /home/ubuntu/liquid_admin/releases/20130713150911/public/images /home/ubuntu/liquid_admin/releases/20130713150911/public/stylesheets /home/ubuntu/liquid_admin/releases/20130713150911/public/javascripts -exec touch -t 201307131509.22 -- {} ';'; true"
servers: ["54.229.78.34"]
[54.229.78.34] executing command
** [out :: 54.229.78.34] find: `/home/ubuntu/liquid_admin/releases/20130713150911/public/images': No such file or directory
** [out :: 54.229.78.34] find: `/home/ubuntu/liquid_admin/releases/20130713150911/public/stylesheets': No such file or directory
** [out :: 54.229.78.34] find: `/home/ubuntu/liquid_admin/releases/20130713150911/public/javascripts': No such file or directory
command finished in 767ms
* 2013-07-13 17:09:23 executing `deploy:create_symlink'
* executing "sudo -p 'sudo password: ' rm -f /home/ubuntu/liquid_admin/current && sudo -p 'sudo password: ' ln -s /home/ubuntu/liquid_admin/releases/20130713150911 /home/ubuntu/liquid_admin/current"
servers: ["54.229.78.34"]
[54.229.78.34] executing command
command finished in 837ms
** transaction: commit
UPDATE
I did "cap deploy:update" and it updated some of the files. For example my database.yml was updated. But none of the new views, new controllers, or new models are there...
UPDATE 2
It seems to have only changed files that existed during my first deployment. So "posts" and "home" and all that is changed... but any new controllers, models, or views that I made after that were not deployed.

cap deploy generally deploys master. Did you make your changes on a different branch and forget to merge them back?
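A quick sanity check, using the host and paths from your own deploy log: compare what the remote repository's master points at, what your local master points at, and what Capistrano last wrote to the server's REVISION file.
git ls-remote ssh://ubuntu@54.229.78.34/~/liquid_admin.git master        # commit cap will deploy
git rev-parse master                                                     # commit you have locally
ssh ubuntu@54.229.78.34 cat /home/ubuntu/liquid_admin/current/REVISION   # commit last deployed
If the first two SHAs differ, your new work is sitting on another branch (or was never pushed to that repository).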

Related

OpenStack install fails on RabbitMQ - Centos8

I am following this post to install Packstack on my CentOS 8 server. Everything goes fine until I reach this install step: "packstack --answer-file /root/openstack-answer.txt". Here is the error:
...
...
Copying Puppet modules and manifests [ DONE ]
Applying 192.168.168.171_controller.pp
192.168.168.171_controller.pp: [ ERROR ]
Applying Puppet manifests [ ERROR ]
ERROR : Error appeared during Puppet run: 192.168.168.171_controller.pp
Error: Facter: error while resolving custom fact "rabbitmq_nodename": undefined method `[]' for nil:NilClass
You will find full trace in log /var/tmp/packstack/20210515-120855-k817cwco/manifests/192.168.168.171_controller.pp.log
Please check log file /var/tmp/packstack/20210515-120855-k817cwco/openstack-setup.log for more information
Additional information:
* Parameter CONFIG_NEUTRON_L2_AGENT: You have chosen OVN Neutron backend. Note that this backend does not support the VPNaaS or FWaaS services. Geneve will be used as the encapsulation method for tenant networks
* Time synchronization installation was skipped. Please note that unsynchronized time on server instances might be problem for some OpenStack components.
* File /root/keystonerc_admin has been created on OpenStack client host 192.168.168.171. To use the command line tools you need to source the file.
* To access the OpenStack Dashboard browse to http://192.168.168.171/dashboard .
Please, find your login credentials stored in the keystonerc_admin in your home directory.
Here is the openstack-setup.log
2021-05-15 12:08:56::INFO::shell::100::root:: [localhost] Executing script:
rm -rf /var/tmp/packstack/20210515-120855-k817cwco/manifests/*pp
2021-05-15 12:08:56::INFO::shell::100::root:: [localhost] Executing script:
mkdir -p ~/.ssh
chmod 500 ~/.ssh
grep 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCn8amY2BL11DJlLFjnAgxseuUag93JnVXxmnUpiEvKC2GfYcMq6fEjdqlj5be70V1LRRP4dlHkp2HhkM3dWsp/sDVLUGJIXqwmI08QiEuW7JR35pfnATTf+aw2FgRf/0yvR4uH9oWXw2R909ZEPdqcpD8T72Cz4rAcJjWA3IdWilOIGGxCs3yLN7t2v7RAaIHwEsURiI8DWRo4LcvwMw1dMhd2S4HvFu98uv7Nqd16rdlWR3QpJHZFK/4JLxWtK/7/Bf/o4RFKNlOH+mRmRlaxiT1O//zlKglUtMY/YkhbUhrMGB/jJSq6sSRlyxeLHrhrT3V4AbChH56jEMDOXnGL07FFHvVtWzJv0chyEL1Dav7Ua8N1QfoaHcfskem0rWXgtCs3QZjQWde7rFSGRg1/7cQpb51n9ZdXZagPHhLRNNI/eTKA5C2ed8p/KK1S00PNHSub4BP8Jsw5eVhUZAjZG38YfS536tORo0ciYj42dkAAVIWI44X8psU8BirQotU= root@openstack.thomsoncodes.com' ~/.ssh/authorized_keys > /dev/null 2>&1 || echo ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCn8amY2BL11DJlLFjnAgxseuUag93JnVXxmnUpiEvKC2GfYcMq6fEjdqlj5be70V1LRRP4dlHkp2HhkM3dWsp/sDVLUGJIXqwmI08QiEuW7JR35pfnATTf+aw2FgRf/0yvR4uH9oWXw2R909ZEPdqcpD8T72Cz4rAcJjWA3IdWilOIGGxCs3yLN7t2v7RAaIHwEsURiI8DWRo4LcvwMw1dMhd2S4HvFu98uv7Nqd16rdlWR3QpJHZFK/4JLxWtK/7/Bf/o4RFKNlOH+mRmRlaxiT1O//zlKglUtMY/YkhbUhrMGB/jJSq6sSRlyxeLHrhrT3V4AbChH56jEMDOXnGL07FFHvVtWzJv0chyEL1Dav7Ua8N1QfoaHcfskem0rWXgtCs3QZjQWde7rFSGRg1/7cQpb51n9ZdXZagPHhLRNNI/eTKA5C2ed8p/KK1S00PNHSub4BP8Jsw5eVhUZAjZG38YfS536tORo0ciYj42dkAAVIWI44X8psU8BirQotU= root@openstack.thomsoncodes.com >> ~/.ssh/authorized_keys
chmod 400 ~/.ssh/authorized_keys
restorecon -r ~/.ssh
2021-05-15 12:08:56::INFO::shell::100::root:: [192.168.168.171] Executing script:
rpm -q --whatprovides yum-utils || yum install -y yum-utils
2021-05-15 12:08:56::INFO::shell::49::root:: Executing command:
rpm -qa --qf='%{name}-%{version}-%{release}.%{arch}
' | grep centos-release-openstack
2021-05-15 12:09:10::INFO::shell::100::root:: [192.168.168.171] Executing script:
(rpm -q 'centos-release-openstack-ussuri' || yum -y install centos-release-openstack-ussuri) || true
2021-05-15 12:09:10::INFO::shell::49::root:: Executing command:
rpm -q rdo-release --qf='%{version}-%{release}.%{arch}
'
2021-05-15 12:09:10::INFO::shell::100::root:: [192.168.168.171] Executing script:
rpm -q --whatprovides yum-utils || yum install -y yum-utils
yum clean metadata
2021-05-15 12:09:11::INFO::shell::100::root:: [192.168.168.171] Executing script:
yum install -y puppet hiera openssh-clients tar nc rubygem-json
yum update -y puppet hiera openssh-clients tar nc rubygem-json
rpm -q --whatprovides puppet
rpm -q --whatprovides hiera
rpm -q --whatprovides openssh-clients
rpm -q --whatprovides tar
rpm -q --whatprovides nc
rpm -q --whatprovides rubygem-json
2021-05-15 12:09:38::INFO::shell::100::root:: [192.168.168.171] Executing script:
mkdir -p /var/tmp/packstack
mkdir --mode 0700 /var/tmp/packstack/18227dca781e48cda2db45952d159190
mkdir --mode 0700 /var/tmp/packstack/18227dca781e48cda2db45952d159190/modules
mkdir --mode 0700 /var/tmp/packstack/18227dca781e48cda2db45952d159190/resources
2021-05-15 12:09:38::INFO::shell::100::root:: [192.168.168.171] Executing script:
facter -p
2021-05-15 12:09:42::INFO::shell::100::root:: [192.168.168.171] Executing script:
[[ -f /etc/hiera.yaml ]] && [[ ! -L /etc/puppet/hiera.yaml ]] && ln -s /etc/hiera.yaml /etc/puppet/hiera.yaml || echo "skipping creation of hiera.yaml symlink"
sed -i 's;:datadir:.*;:datadir: /var/tmp/packstack/18227dca781e48cda2db45952d159190/hieradata;g' $(puppet config print hiera_config)
2021-05-15 12:09:43::INFO::shell::100::root:: [192.168.168.171] Executing script:
vgdisplay cinder-volumes
2021-05-15 12:09:43::INFO::shell::100::root:: [localhost] Executing script:
ssh-keygen -t rsa -b 2048 -f "/var/tmp/packstack/20210515-120855-k817cwco/nova_migration_key" -N ""
2021-05-15 12:09:43::INFO::shell::100::root:: [localhost] Executing script:
ssh-keyscan 192.168.168.171
2021-05-15 12:09:43::INFO::shell::100::root:: [192.168.168.171] Executing script:
systemctl
2021-05-15 12:09:43::INFO::shell::100::root:: [192.168.168.171] Executing script:
systemctl is-enabled NetworkManager
2021-05-15 12:09:44::INFO::shell::100::root:: [192.168.168.171] Executing script:
systemctl is-active NetworkManager
2021-05-15 12:09:44::INFO::shell::100::root:: [192.168.168.171] Executing script:
echo $HOME
2021-05-15 12:09:44::INFO::shell::100::root:: [localhost] Executing script:
cd /var/tmp/packstack/20210515-120855-k817cwco/hieradata
tar --dereference -cpzf - ../hieradata | ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null root@192.168.168.171 tar -C /var/tmp/packstack/18227dca781e48cda2db45952d159190 -xpzf -
cd /usr/lib/python3.6/site-packages/packstack/puppet
cd /var/tmp/packstack/20210515-120855-k817cwco/manifests
tar --dereference -cpzf - ../manifests | ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null root@192.168.168.171 tar -C /var/tmp/packstack/18227dca781e48cda2db45952d159190 -xpzf -
cd /usr/share/openstack-puppet/modules
tar --dereference -cpzf - aodh apache ceilometer certmonger cinder concat firewall glance gnocchi heat horizon inifile ironic keystone magnum manila memcached mysql neutron nova nssdb openstack openstacklib oslo ovn packstack panko placement rabbitmq redis remote rsync sahara ssh stdlib swift sysctl systemd tempest trove vcsrepo vswitch xinetd | ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null root@192.168.168.171 tar -C /var/tmp/packstack/18227dca781e48cda2db45952d159190/modules -xpzf -
2021-05-15 12:25:43::ERROR::run_setup::1062::root:: Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/packstack/installer/run_setup.py", line 1057, in main
_main(options, confFile, logFile)
File "/usr/lib/python3.6/site-packages/packstack/installer/run_setup.py", line 681, in _main
runSequences()
File "/usr/lib/python3.6/site-packages/packstack/installer/run_setup.py", line 648, in runSequences
controller.runAllSequences()
File "/usr/lib/python3.6/site-packages/packstack/installer/setup_controller.py", line 81, in runAllSequences
sequence.run(config=self.CONF, messages=self.MESSAGES)
File "/usr/lib/python3.6/site-packages/packstack/installer/core/sequences.py", line 109, in run
step.run(config=config, messages=messages)
File "/usr/lib/python3.6/site-packages/packstack/installer/core/sequences.py", line 50, in run
self.function(config, messages)
File "/usr/lib/python3.6/site-packages/packstack/plugins/puppet_950.py", line 215, in apply_puppet_manifest
wait_for_puppet(currently_running, messages)
File "/usr/lib/python3.6/site-packages/packstack/plugins/puppet_950.py", line 128, in wait_for_puppet
validate_logfile(log)
File "/usr/lib/python3.6/site-packages/packstack/modules/puppet.py", line 107, in validate_logfile
raise PuppetError(message)
packstack.installer.exceptions.PuppetError: Error appeared during Puppet run: 192.168.168.171_controller.pp
Error: Facter: error while resolving custom fact "rabbitmq_nodename": undefined method `[]' for nil:NilClass
You will find full trace in log /var/tmp/packstack/20210515-120855-k817cwco/manifests/192.168.168.171_controller.pp.log
2021-05-15 12:25:43::INFO::shell::100::root:: [192.168.168.171] Executing script:
rm -rf /var/tmp/packstack/18227dca781e48cda2db45952d159190
Here is the controller.pp.log
Error: Facter: error while resolving custom fact "rabbitmq_nodename": undefined method `[]' for nil:NilClass
Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/6.14/deprecated_language.html
(file & line not available)
Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
(file: /etc/puppet/hiera.yaml)
...
...
Notice: /Stage[main]/Swift/Swift_config[swift-hash/swift_hash_path_suffix]/value: value changed 2399ecebcf7a4128 to 00a7d595320749e9
Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/password]/value: value changed dc6fbb7c617a48c0 to e2187def7d184d58
Error: Systemd start for rabbitmq-server failed!
journalctl log for rabbitmq-server:
-- Logs begin at Sat 2021-05-15 11:54:15 CDT, end at Sat 2021-05-15 12:18:53 CDT. --
May 15 12:18:23 openstack systemd[1]: Starting RabbitMQ broker...
May 15 12:18:23 openstack rabbitmq-server[11773]: 2021-05-15 12:18:23 [warning] Both old (.config) and new (.conf) format config files exist.
May 15 12:18:23 openstack rabbitmq-server[11773]: Using the old format config file: /etc/rabbitmq/rabbitmq.config
May 15 12:18:23 openstack rabbitmq-server[11773]: Please update your config files to the new format and remove the old file.
May 15 12:18:53 openstack rabbitmq-server[11773]: ERROR: epmd error for host openstack: timeout (timed out)
May 15 12:18:53 openstack systemd[1]: rabbitmq-server.service: Main process exited, code=exited, status=1/FAILURE
May 15 12:18:53 openstack systemd[1]: rabbitmq-server.service: Failed with result 'exit-code'.
May 15 12:18:53 openstack systemd[1]: Failed to start RabbitMQ broker.
Error: /Stage[main]/Rabbitmq::Service/Service[rabbitmq-server]/ensure: change from 'stopped' to 'running' failed: Systemd start for rabbitmq-server failed!
journalctl log for rabbitmq-server:
-- Logs begin at Sat 2021-05-15 11:54:15 CDT, end at Sat 2021-05-15 12:18:53 CDT. --
May 15 12:18:23 openstack systemd[1]: Starting RabbitMQ broker...
May 15 12:18:23 openstack rabbitmq-server[11773]: 2021-05-15 12:18:23 [warning] Both old (.config) and new (.conf) format config files exist.
May 15 12:18:23 openstack rabbitmq-server[11773]: Using the old format config file: /etc/rabbitmq/rabbitmq.config
May 15 12:18:23 openstack rabbitmq-server[11773]: Please update your config files to the new format and remove the old file.
May 15 12:18:53 openstack rabbitmq-server[11773]: ERROR: epmd error for host openstack: timeout (timed out)
May 15 12:18:53 openstack systemd[1]: rabbitmq-server.service: Main process exited, code=exited, status=1/FAILURE
May 15 12:18:53 openstack systemd[1]: rabbitmq-server.service: Failed with result 'exit-code'.
May 15 12:18:53 openstack systemd[1]: Failed to start RabbitMQ broker.
Notice: /Stage[main]/Swift::Deps/Anchor[swift::config::end]: Triggered 'refresh' from 2 events
Notice: /Stage[main]/Swift::Deps/Anchor[swift::service::begin]: Triggered 'refresh' from 2 events
Notice: /Stage[main]/Keystone::Deps/Anchor[keystone::config::end]: Triggered 'refresh' from 1 event
Notice: /Stage[main]/Keystone/Exec[keystone-manage fernet_setup]: Triggered 'refresh' from 1 event
Notice: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_%]/Mysql_user[keystone_admin@%]/password_hash: changed password
Notice: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_127.0.0.1]/Mysql_user[keystone_admin@127.0.0.1]/password_hash: changed password
Notice: /Stage[main]/Keystone::Deps/Anchor[keystone::db::end]: Triggered 'refresh' from 1 event
Notice: /Stage[main]/Keystone::Deps/Anchor[keystone::dbsync::begin]: Triggered 'refresh' from 1 event
Notice: /Stage[main]/Keystone::Db::Sync/Exec[keystone-manage db_sync]: Triggered 'refresh' from 2 events
Notice: /Stage[main]/Keystone::Deps/Anchor[keystone::dbsync::end]: Triggered 'refresh' from 1 event
Notice: /Stage[main]/Keystone::Bootstrap/Exec[keystone bootstrap]: Triggered 'refresh' from 1 event
Notice: /Stage[main]/Keystone::Deps/Anchor[keystone::service::begin]: Triggered 'refresh' from 4 events
Warning: /Stage[main]/Apache::Service/Service[httpd]: Skipping because of failed dependencies
Warning: /Stage[main]/Keystone::Deps/Anchor[keystone::service::end]: Skipping because of failed dependencies
Warning: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::service::end]: Skipping because of failed dependencies
Warning: /Stage[main]/Aodh::Deps/Anchor[aodh::service::end]: Skipping because of failed dependencies
Warning: /Stage[main]/Placement::Deps/Anchor[placement::service::end]: Skipping because of failed dependencies
Warning: /Stage[main]/Keystone::Cron::Fernet_rotate/Cron[keystone-manage fernet_rotate]: Skipping because of failed dependencies
Warning: /Stage[main]/Keystone/Keystone_domain[Default]: Skipping because of failed dependencies
Warning: /Stage[main]/Keystone/Exec[restart_keystone]: Skipping because of failed dependencies
Warning: /Stage[main]/Keystone/Anchor[default_domain_created]: Skipping because of failed dependencies
Warning: /Stage[main]/Packstack::Keystone/Keystone_role[_member_]: Skipping because of failed dependencies
Warning: /Stage[main]/Keystone::Bootstrap/Keystone_role[admin]: Skipping because of failed dependencies
Warning: /Stage[main]/Keystone::Bootstrap/Keystone_user[admin]: Skipping because of failed dependencies
Warning: /Stage[main]/Keystone::Bootstrap/Keystone_tenant[services]: Skipping because of failed dependencies
Warning: /Stage[main]/Keystone::Bootstrap/Keystone_tenant[admin]: Skipping because of failed dependencies
Warning: /Stage[main]/Keystone::Bootstrap/Keystone_user_role[admin@admin]: Skipping because of failed dependencies
Warning: /Stage[main]/Keystone::Bootstrap/Keystone_service[keystone::identity]: Skipping because of failed dependencies
Warning: /Stage[main]/Keystone::Bootstrap/Keystone_endpoint[RegionOne/keystone::identity]: Skipping because of failed dependencies
Warning: /Stage[main]/Horizon::Deps/Anchor[horizon::service::end]: Skipping because of failed dependencies
Warning: /Stage[main]/Swift::Keystone::Auth/Keystone_role[SwiftOperator]: Skipping because of failed dependencies
Warning: /Stage[main]/Swift::Keystone::Auth/Keystone_role[ResellerAdmin]: Skipping because of failed dependencies
Warning: /Stage[main]/Heat::Keystone::Auth/Keystone_role[heat_stack_user]: Skipping because of failed dependencies
Warning: /Stage[main]/Heat::Keystone::Auth/Keystone_role[heat_stack_owner]: Skipping because of failed dependencies
Warning: /Stage[main]/Heat::Keystone::Domain/Keystone_domain[heat]: Skipping because of failed dependencies
Warning: /Stage[main]/Heat::Keystone::Domain/Keystone_user[heat_admin::heat]: Skipping because of failed dependencies
Warning: /Stage[main]/Heat::Keystone::Domain/Keystone_user_role[heat_admin::heat@::heat]: Skipping because of failed dependencies
Warning: /Stage[main]/Glance::Keystone::Auth/Keystone::Resource::Service_identity[glance]/Keystone_user[glance]: Skipping because of failed dependencies
Warning: /Stage[main]/Glance::Keystone::Auth/Keystone::Resource::Service_identity[glance]/Keystone_user_role[glance@services]: Skipping because of failed dependencies
Warning: /Stage[main]/Glance::Keystone::Auth/Keystone::Resource::Service_identity[glance]/Keystone_service[glance::image]: Skipping because of failed dependencies
Warning: /Stage[main]/Glance::Keystone::Auth/Keystone::Resource::Service_identity[glance]/Keystone_endpoint[RegionOne/glance::image]: Skipping because of failed dependencies
Warning: /Stage[main]/Glance::Deps/Anchor[glance::service::begin]: Skipping because of failed dependencies
Warning: /Stage[main]/Glance::Api/Service[glance-api]: Skipping because of failed dependencies
Warning: /Stage[main]/Glance::Registry/Service[glance-registry]: Skipping because of failed dependencies
Warning: /Stage[main]/Glance::Deps/Anchor[glance::service::end]: Skipping because of failed dependencies
Warning: /Stage[main]/Cinder::Keystone::Auth/Keystone::Resource::Service_identity[cinder]/Keystone_user[cinder]: Skipping because of failed dependencies
Warning: /Stage[main]/Cinder::Keystone::Auth/Keystone::Resource::Service_identity[cinder]/Keystone_user_role[cinder@services]: Skipping because of failed dependencies
Warning: /Stage[main]/Cinder::Keystone::Auth/Keystone::Resource::Service_identity[cinderv2]/Keystone_service[cinderv2::volumev2]: Skipping because of failed dependencies
Warning: /Stage[main]/Cinder::Keystone::Auth/Keystone::Resource::Service_identity[cinderv2]/Keystone_endpoint[RegionOne/cinderv2::volumev2]: Skipping because of failed dependencies
Warning: /Stage[main]/Cinder::Keystone::Auth/Keystone::Resource::Service_identity[cinderv3]/Keystone_service[cinderv3::volumev3]: Skipping because of failed dependencies
Warning: /Stage[main]/Cinder::Keystone::Auth/Keystone::Resource::Service_identity[cinderv3]/Keystone_endpoint[RegionOne/cinderv3::volumev3]: Skipping because of failed dependencies
Warning: /Stage[main]/Cinder::Deps/Anchor[cinder::service::end]: Skipping because of failed dependencies
Error: Could not prefetch cinder_type provider 'openstack': Could not authenticate
Error: Failed to apply catalog: Could not authenticate
I can see that other people have experienced similar issues, but those solutions did not work for me. I have also done the steps below:
Verified the hostname and hosts file
Opened the ports or disabled firewalld
Disabled SELinux
I changed the hostname to openstack, and here is my hosts file:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
<my-ip> openstack
I am not sure whether this is a hostname issue, a firewall issue, or something else. I have been struggling with this for quite some time and any help would be greatly appreciated.
I ran into the same issue. Maybe you can modify your /etc/hosts and add your machine's actual hostname to it, not openstack.
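For example, reusing the IP and domain that appear in the logs above (substitute your own FQDN), the hosts file would map the machine's real name so epmd can resolve it:
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.168.171 openstack.thomsoncodes.com openstack
You can then verify name resolution and the Erlang port mapper before re-running packstack:
hostname -f
ping -c 1 openstack
epmd -names
systemctl restart rabbitmq-server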

Capistrano fails SSH public key authentication but all commands still succeed

I am having trouble deploying with Capistrano using public key authentication. On Windows, I have it configured to start an SSH agent automatically when I open my terminal.
Agent pid 4476
Enter passphrase for /c/Users/Lea/.ssh/id_rsa:
Identity added: /c/Users/Lea/.ssh/id_rsa (/c/Users/Lea/.ssh/id_rsa)
id_rsa is in my authorized_keys file on the server, and I use it all the time to ssh into it using ssh lea@web.3.
My Capfile is as follows:
require 'rubygems'
require 'railsless-deploy'

# application name
set :application, "site.com"

# multi-stage deploy
task :production do
  set :branch, "master"
  set :app_environment, "production"
  role :web, "web.3", :primary => true
  set :deploy_to, "/var/www/vhosts/site/site.com/"
end

task :dev do
  set :branch, `git rev-parse HEAD`
  set :app_environment, "development"
  role :web, "web.3", :primary => true
  set :deploy_to, "/var/www/vhosts/site/dev.site.com/"
end

# deploys remotely on SSH using deploy only key
set :repository, "git@bitbucket.org:us/site.git"
set :scm, :git
set :git_enable_submodules, 1
set :deploy_via, :remote_cache

# release configuration
set :use_sudo, false
set :keep_releases, 2
after "deploy:update", "deploy:cleanup"

# the web server user
set :user, "lea"

namespace :deploy do
  task :migrate do
    # do nothing
  end
  task :finalize_update, :except => { :no_release => true } do
    transaction do
      #run "chmod -R g+w #{release_path}"
      run "echo '#{app_environment}' > #{release_path}/ENVIRONMENT"
    end
  end
  task :restart, :except => { :no_release => true } do
    # don't need to restart
  end
end
When I run the deployment, it asks again for my id_rsa passphrase. Why does it ask when I already have the ssh agent running and the passphrase entered?
Following is the log of the cap dev deploy command. You can see where it asks for my passphrase. Also note that when I ssh into the server, it starts an ssh-agent there as well and loads a deployment_rsa key used for git (you can see these messages in the log).
$ cap dev deploy
DL is deprecated, please use Fiddle
* 2013-09-12 13:19:30 executing `dev'
* 2013-09-12 13:19:30 executing `deploy'
* 2013-09-12 13:19:30 executing `deploy:update'
** transaction: start
* 2013-09-12 13:19:30 executing `deploy:update_code'
updating the cached checkout on all servers
* executing "if [ -d /var/www/vhosts/site/dev.site.com/shared/cache
d-copy ]; then cd /var/www/vhosts/site/dev.site.com/shared/cached-cop
y && git fetch -q origin && git fetch --tags -q origin && git reset -q --hard 33
09af4ac302a6c2dc46bcf36e877abbd8472988\\\n && git submodule -q init && git submo
dule -q sync && export GIT_RECURSIVE=$([ ! \"`git --version`\" \\< \"git version
1.6.5\" ] && echo --recursive) && git submodule -q update --init $GIT_RECURSIVE
&& git clean -q -d -x -f; else git clone -q git#bitbucket.org:us/v
entek.git /var/www/vhosts/site/dev.site.com/shared/cached-copy && cd
/var/www/vhosts/site/dev.site.com/shared/cached-copy && git checkout
-q -b deploy 3309af4ac302a6c2dc46bcf36e877abbd8472988 && git submodule -q init &
& git submodule -q sync && export GIT_RECURSIVE=$([ ! \"`git --version`\" \\< \"
git version 1.6.5\" ] && echo --recursive) && git submodule -q update --init $GI
T_RECURSIVE; fi"
servers: ["web.3"]
Enter passphrase for c:/Users/Lea/.ssh/id_rsa:
[web.3] executing command
** [web.3 :: out] Agent pid 11336
** [web.3 :: err] Identity added: /home/lea/.ssh/deployment_rsa (/home/lea/.ssh/deployment_rsa)
command finished in 2300ms
copying the cached version to /var/www/vhosts/site/dev.site.com/releases/20130912191939
* executing "cp -RPp /var/www/vhosts/site/dev.site.com/shared/cached-copy /var/www/vhosts/site/dev.site.com/releases/20130912191939 && (echo 3309af4ac302a6c2dc46bcf36e877abbd8472988\\\n > /var/www/vhosts/us/dev.site.com/releases/20130912191939/REVISION)"
servers: ["web.3"]
[web.3] executing command
** [out :: web.3] Agent pid 11442
*** [err :: web.3] Identity added: /home/lea/.ssh/deployment_rsa (/home/lea/.ssh/deployment_rsa)
command finished in 751ms
* 2013-09-12 13:19:39 executing `deploy:finalize_update'
* executing "echo 'development' > /var/www/vhosts/site/dev.site.com
/releases/20130912191939/ENVIRONMENT"
servers: ["web.3"]
[web.3] executing command
** [out :: web.3] Agent pid 11451
*** [err :: web.3] Identity added: /home/lea/.ssh/deployment_rsa (/home/lea/.ssh/deployment_rsa)
command finished in 610ms
* 2013-09-12 13:19:40 executing `deploy:create_symlink'
* executing "rm -f /var/www/vhosts/site/dev.site.com/current && ln
-s /var/www/vhosts/site/dev.site.com/releases/20130912191939 /var/www
/vhosts/site/dev.site.com/current"
servers: ["web.3"]
[web.3] executing command
** [out :: web.3] Agent pid 11460
*** [err :: web.3] Identity added: /home/lea/.ssh/deployment_rsa (/home/lea/.ssh/deployment_rsa)
command finished in 621ms
** transaction: commit
triggering after callbacks for `deploy:update'
* 2013-09-12 13:19:41 executing `deploy:cleanup'
* executing "ls -xt /var/www/vhosts/site/dev.site.com/releases"
servers: ["web.3"]
[web.3] executing command
[err :: web.3] Identity added: /home/lea/.ssh/deployment_rsa (/home/lea/.ssh/deployment_rsa)
command finished in 1186ms
** keeping 2 of 7 deployed releases
* executing "rm -rf /var/www/vhosts/site/dev.site.com/releases/2013
0906181120 /var/www/vhosts/site/dev.site.com/releases/20130912185329
/var/www/vhosts/site/dev.site.com/releases/20130912185937 /var/www/vhosts/site/dev.site.com/releases/20130912191939 /var/www/vhosts/us/dev.site.com/releases/11469"
servers: ["web.3"]
[web.3] executing command
** [out :: web.3] Agent pid 11476
*** [err :: web.3] Identity added: /home/lea/.ssh/deployment_rsa (/home/lea/.ssh/deployment_rsa)
command finished in 750ms
$
Now, my major problem is not the passphrase. Every time I run Capistrano, authentication fails twice per deployment. I see this in the ssh log on the server, but Capistrano gives no indication of it:
11:58:44 web3 sshd[1134]: Failed password for lea from [ip] port 42421 ssh2
11:58:56 web3 sshd[1134]: Failed password for lea from [ip] port 42421 ssh2
The server is running fail2ban, which blocks my IP (for 10 minutes) after 5 failed authentications, meaning I get locked out after running Capistrano 3 times. This is a huge and unacceptable problem, and I have no idea why it occurs. Do you have any advice for how to troubleshoot this, or a solution?
Thanks!
I ended up solving this problem myself. I was being locked out of the server because it was running an old version of Fail2ban.
When connecting over SSH, sshd does a reverse DNS lookup. The reverse DNS for my office internet connection was failing, and sshd was printing an error into the /var/log/secure log file:
Address x.x.x.x maps to server.domain.com, but this does not map back to the address - POSSIBLE BREAK-IN ATTEMPT!
Fail2ban was recognizing this as a failed connection, and blocking my IP because of it. It was never a problem when connecting manually because that is infrequent, but when Capistrano makes several connections in a row it was triggering it.
I used the info here: https://github.com/fail2ban/fail2ban/pull/64 to solve the problem by removing the regular expression from the fail2ban config file.
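For reference, the expression I removed is the one matching the reverse-DNS warning. In Fail2ban releases of that era it looked roughly like the following in /etc/fail2ban/filter.d/sshd.conf (the exact wording varies by version, so match it against your own file rather than copying this verbatim):
^%(__prefix_line)sAddress <HOST> maps to \S+, but this does not map back to the address - POSSIBLE BREAK-IN ATTEMPT!\s*$
Deleting or commenting out that failregex line stops a failed reverse lookup from being counted as a failed login; upgrading Fail2ban, per the pull request above, fixes the classification properly.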
Step 1:
Do you really need a passphrase for your keys? This kind of risk today is mitigated by full disk encryption products or use of truecrypt-ed USB sticks. Less PITA, and still passes your security manager's best practices.
That said:
http://blog.blenderbox.com/2013/02/20/ssh-agent-forwarding-with-github/
Try adding
ssh_options[:forward_agent] = true
to the Capfile, not deploy.rb
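In context, that is a single extra line alongside the other settings in the Capfile from the question (a sketch; the user and host are the ones above):
# the web server user
set :user, "lea"
ssh_options[:forward_agent] = true  # reuse the local ssh-agent instead of prompting for the passphrase again
You can confirm forwarding works outside Capistrano with ssh -A lea@web.3 'ssh-add -l', which should list your local id_rsa on the server.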

Capistrano deploy fails with rvm-shell saying rvm not being found

Recently I inherited a Rails application that has been deployed to production many times. I have previously deployed to a staging environment. Now, it fails to deploy to either. However, another Rails application that deploys to the same servers with the same account successfully deploys using rvm and capistrano.
I am receiving the following error:
  * executing "if [ -d /path/to/app/shared/cached-copy ]; then svn switch -q --username svnusername --password <filtered> --no-auth-cache  -r111111 https://svn.server.local/svn/projects/app/trunk /path/to/app/shared/cached-copy; else svn checkout -q --username svnusername --password <filtered> --no-auth-cache  -r111111 https://svn.server.local/svn/projects/app/trunk /path/to/app/shared/cached-copy; fi"
    servers: ["myserver-prod01.private.local"]
    [myserver-prod01.private.local] executing command
 ** [myserver-prod01.private.local:: out]
 ** [myserver-prod01.private.local:: out] $rvm_path (/home/appuser/.rvm/) does not exist.
 ** [myserver-prod01.private.local:: out] /usr/local/rvm/scripts/rvm: line 174: rvm_is_a_shell_function: command not found
 ** [myserver-prod01.private.local:: out] /usr/local/rvm/scripts/rvm: line 185: __rvm_teardown: command not found
 ** [myserver-prod01.private.local:: out] /usr/local/rvm/bin/rvm-shell: line 83: rvm: command not found
 ** [myserver-prod01.private.local:: out] Error: RVM was unable to use 'ruby-1.9.3-current@appuser'
    command finished in 554ms
*** [deploy:update_code] rolling back
  * executing "rm -rf /path/to/app/releases/20130425150643; true"
    servers: ["myserver-prod01.private.local"]
    [myserver-prod01.private.local] executing command
 ** [out :: myserver-prod01.private.local] 
 ** [out :: myserver-prod01.private.local] $rvm_path (/home/appuser/.rvm/) does not exist.
 ** [out :: myserver-prod01.private.local] /usr/local/rvm/scripts/rvm: line 174: rvm_is_a_shell_function: command not found
 ** [out :: myserver-prod01.private.local] /usr/local/rvm/scripts/rvm: line 185: __rvm_teardown: command not found
 ** [out :: myserver-prod01.private.local] /usr/local/rvm/bin/rvm-shell: line 83: rvm: command not found
 ** [out :: myserver-prod01.private.local] Error: RVM was unable to use 'ruby-1.9.3-current@appuser'
    command finished in 209ms
 ** [deploy:update_code] exception while rolling back: Capistrano::CommandError, failed: "env PATH=/opt/toolkit/extra-dev-current/root/usr/bin:$PATH:/usr/database/bin LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 RAILS_ENV=production rvm_path=$HOME/.rvm/ /usr/local/rvm/bin/rvm-shell 'ruby-1.9.3-current@appuser' -c 'rm -rf /path/to/app/releases/20130425150643; true'" on myserver-prod01.private.local
failed: "env PATH=/opt/toolkit/extra-dev-current/root/usr/bin:$PATH:/usr/database/bin LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 RAILS_ENV=production rvm_path=$HOME/.rvm/ /usr/local/rvm/bin/rvm-shell 'ruby-1.9.3-current@appuser' -c 'if [ -d /path/to/app/shared/cached-copy ]; then svn switch -q --username svnusername --password <filtered> --no-auth-cache  -r111111 https://svn.server.local/svn/projects/app/trunk /path/to/app/shared/cached-copy; else svn checkout -q --username svnusername --password <filtered> --no-auth-cache  -r111111 https://svn.server.local/svn/projects/app/trunk /path/to/app/shared/cached-copy; fi'" on myserver-prod01.private.local
I have checked the server. RVM is installed and working.
This is an rvm/rvm-capistrano version mismatch. Check the version of rvm installed on the server and compare it to the version of rvm-capistrano installed with bundle. If your server has rvm 1.18.x, then lock the version of rvm-capistrano in your Gemfile to 1.2.x; rvm-capistrano 1.3.x requires rvm 1.19.x.
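To put the two versions side by side (the version numbers are the ones from the answer above):
rvm --version                      # on the deploy target
bundle list | grep rvm-capistrano  # in the project
If the server is stuck on rvm 1.18.x, pin the gem in the Gemfile, for example:
gem 'rvm-capistrano', '~> 1.2.0'  # a sketch; keep to 1.2.x until the server's rvm is upgraded to 1.19.x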
I am answering my own question because I had to figure this out the hard way. It is obvious in retrospect, but not when you first see the error. This is a case of failure to specify product versions in the Gemfile. Generally, our extensive test suite catches such problems, but our test suite does not cover deploy so we missed this one until it was too late.

Rubber::Util.has_asset_pipeline? method missing

I think we updated rubber to 1.15.0 a few days back, and this problem now shows up in deploy.rb. Here is the block:
if Rubber::Util.has_asset_pipeline?
  # load asset pipeline tasks, and reorder them to run after
  # rubber:config so that database.yml/etc has been generated
  load 'deploy/assets'
  callbacks[:after].delete_if {|c| c.source == "deploy:assets:precompile"}
  callbacks[:before].delete_if {|c| c.source == "deploy:assets:symlink"}
  before "deploy:assets:precompile", "deploy:assets:symlink"
  after "rubber:config", "deploy:assets:precompile"
end
The problem is that when I comment it out, the deploy fails saying:
* executing "sudo -p 'sudo password: ' bash -l -c 'cd /mnt/foo-production/current && RUBBER_ENV=\"production\" RAILS_ENV=\"production\" bundle exec rake rubber:config'"
servers: ["app01.foo.com"]
[app01.foo.com] executing command
** [out :: app01.foo.com] rake aborted!
** [out :: app01.foo.com] Don't know how to build task 'rubber:config'
** [out :: app01.foo.com]
** [out :: app01.foo.com] (See full trace by running task with --trace)
command finished in 17128ms
failed: "/bin/bash -l -c 'sudo -p '\\''sudo password: '\\'' bash -l -c '\\''cd /mnt/foo-production/current && RUBBER_ENV=\"production\" RAILS_ENV=\"production\" bundle exec rake rubber:config'\\'''" on app01.foo.com
Any idea why? Thanks!

deploy can't find asset folders in the public dir

I'm using Rails 3.1 RC4. Somehow, when I deploy to a production server, the directories for images, stylesheets, and javascripts are not found, and deployment fails. I do have the necessary code in deploy.rb:
namespace :deploy do
  task :start do ; end
  task :stop do ; end
  desc "Restarting mod_rails with restart.txt"
  task :restart, :roles => :app, :except => { :no_release => true } do
    run "touch #{current_path}/tmp/restart.txt"
  end
  task :precompile do
    run "cd #{release_path}; RAILS_ENV=production rake assets:precompile"
  end
end
after 'deploy:update_code', 'deploy:precompile'
And here is the error I get:
executing "find /var/www/nattyvelo/releases/20110624033801/public/images /var/www/nattyvelo/releases/20110624033801/public/stylesheets /var/www/nattyvelo/releases/20110624033801/public/javascripts -exec touch -t 201106240338.03 {} ';'; true"
servers: ["66.228.39.243"]
[66.228.39.243] executing command
** [out :: 66.228.39.243] find: `/var/www/nattyvelo/releases/20110624033801/public/images': No such file or directory
** [out :: 66.228.39.243] find: `/var/www/nattyvelo/releases/20110624033801/public/stylesheets': No such file or directory
** [out :: 66.228.39.243] find: `/var/www/nattyvelo/releases/20110624033801/public/javascripts': No such file or directory
command finished in 705ms
triggering after callbacks for `deploy:update_code'
* executing `bundle:install'
* executing "ls -x /var/www/nattyvelo/releases"
servers: ["66.228.39.243"]
[66.228.39.243] executing command
command finished in 595ms
* executing "cd /var/www/nattyvelo/releases/20110624033801 && bundle install --gemfile /var/www/nattyvelo/releases/20110624033801/Gemfile --path /var/www/nattyvelo/shared/bundle --deployment --quiet --without development test"
servers: ["66.228.39.243"]
[66.228.39.243] executing command
** [out :: 66.228.39.243] bash: bundle: command not found
command finished in 604ms
*** [deploy:update_code] rolling back
* executing "rm -rf /var/www/nattyvelo/releases/20110624033801; true"
There are two errors happening here.
The first is that there is no longer a public/images, public/stylesheets, or public/javascripts folder within a Rails 3.1 application; they have all been moved into app/assets. However, if you run rake assets:precompile, there will be a public/assets folder, which is where the static assets for your application are served from.
Whatever it is in your deploy script that references these three folders needs to stop doing so, or you'll continue to get this error.
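In the standard Capistrano 2 recipes, the touch on those three folders comes from deploy:finalize_update, and later 2.x releases expose a one-line switch for it; if your version predates the variable, override deploy:finalize_update instead. A sketch for deploy.rb:
set :normalize_asset_timestamps, false  # skip touching public/images, public/stylesheets and public/javascripts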
The second error is that, just like the two other people before me have kind of suggested, you need to have the Bundler gem installed on the server.
You have a PATH problem by the look of it:
** [out :: 66.228.39.243] bash: bundle: command not found
You need to fix your PATH environment variable.
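One common Capistrano 2 fix is to set the remote PATH explicitly in deploy.rb (a sketch; the directory shown is an assumption, so run which bundle on the server to find the real location):
set :default_environment, {
  'PATH' => "/usr/local/bin:$PATH"  # assumed location of the bundle executable; adjust to match `which bundle`
}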
You probably have to install bundler on the production server.
sudo gem install bundler