Capistrano Connection Issue - ruby-on-rails-3

I am trying to run cap deploy:setup in my Rails project directory, and I am getting this error:
* 2013-07-06 02:46:14 executing `deploy:setup'
* executing multiple commands in parallel
-> "else" :: "sudo -p 'sudo password: ' mkdir -p /var/www /var/www/releases /var/www/shared /var/www/shared/system /var/www/shared/log /var/www/shared/pids"
-> "else" :: "sudo -p 'sudo password: ' mkdir -p /var/www /var/www/releases /var/www/shared /var/www/shared/system /var/www/shared/log /var/www/shared/pids"
-> "else" :: "sudo -p 'sudo password: ' mkdir -p /var/www /var/www/releases /var/www/shared /var/www/shared/system /var/www/shared/log /var/www/shared/pids"
-> "else" :: "sudo -p 'sudo password: ' mkdir -p /var/www /var/www/releases /var/www/shared /var/www/shared/system /var/www/shared/log /var/www/shared/pids"
-> "else" :: "sudo -p 'sudo password: ' mkdir -p /var/www /var/www/releases /var/www/shared /var/www/shared/system /var/www/shared/log /var/www/shared/pids"
servers: ["your web-server here", "*web-address-from-capfile*", "your app-server here", "your primary db-server here", "your slave db-server here"]
connection failed for: your primary db-server here (SocketError: getaddrinfo: nodename nor servname provided, or not known), your web-server here (SocketError: getaddrinfo: nodename nor servname provided, or not known), your slave db-server here (SocketError: getaddrinfo: nodename nor servname provided, or not known), your app-server here (SocketError: getaddrinfo: nodename nor servname provided, or not known)
My Capfile is this:
load 'deploy'
# Uncomment if you are using Rails' asset pipeline
load 'deploy/assets'
load 'config/deploy' # remove this line to skip loading any of the default tasks
set :application, "myapp"
set :repository, "file://~/git/#{application}.git"
set :local_repository, "myserver:~/git/#{application}.git"
set :branch, "master"
set :scm, :git
set :deploy_to, "/var/www"
ssh_options[:forward_agent] = true
default_run_options[:pty] = true
set :user, "me"
ssh_options[:keys] = %w(~/.ssh/id_rsa)
set :port, 33333
server "example.com", :app, :web, :db, :primary => true
This error is really driving me crazy. It should be noted that I'm able to ssh into my server fine with my public/private key, and sshd is set to listen on the non-default port on the remote host (hence the set :port line).

So this was a silly mistake... I was editing the Capfile instead of config/deploy.rb in my Rails project directory. The tutorial I was following, https://github.com/capistrano/capistrano/wiki/2.x-From-The-Beginning, very clearly stated to edit config/deploy.rb, but because I was so used to Makefiles I dove right into the Capfile without thinking.
As you'll notice in config/deploy.rb, these default lines are present:
role :web, "your web-server here" # Your HTTP server, Apache/etc
role :app, "your app-server here" # This may be the same as your `Web` server
role :db, "your primary db-server here", :primary => true # This is where Rails migrations will run
role :db, "your slave db-server here"
This is what leads to the posted error: of course "your web-server here" isn't a hostname that can be resolved, hence the getaddrinfo failures.
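For reference, here is a minimal sketch of config/deploy.rb with the placeholders replaced, assuming a single host (example.com, taken from the Capfile above) plays every role; adapt the hostname to your own server.
set :application, "myapp"
set :user, "me"
set :port, 33333
role :web, "example.com"                    # your HTTP server
role :app, "example.com"                    # may be the same machine as the web server
role :db,  "example.com", :primary => true  # where Rails migrations run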

Related

how to fix the issue "Failed to connect to the host via ssh" in ansible

When I execute an Ansible playbook from one server against another remote server, I get this error:
"msg": "Failed to connect to the host via ssh: ssh_askpass: exec(/usr/bin/ssh-askpass): No such file or directory\r\nHost key verification failed.", "unreachable": true"
Below is my playbook:
- hosts: igwcluster_AM:igwcluster_IS
become: true
become_method: sudo
gather_facts: True
tasks:
- name: Install Oracle Java 8
script:/data2/jenkins/workspace/PreReq_Install_To_Servers/IGW/IGW_Cluster/prereqs_Products/Java.sh
I'm using two host groups and each group has 2 servers.
Error log:
UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh_askpass: exec(/usr/bin/ssh-askpass): No such file or directory\r\nHost key verification failed.", "unreachable": true}
Note: I have tried
host_key_checking = False
ssh_args = -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
but it still fails. Please advise me on this.
First of all, you have to put a space after "script:" and indent the script line so it sits exactly under "name:", so it will look like this:
tasks:
  - name: Install Oracle Java 8
    script: /data2/jenkins/workspace/PreReq_Install_To_Servers/IGW/IGW_Cluster/prereqs_Products/Java.sh
Also, try to use an SSH key for SSH authorization.
On the server that you execute the Ansible playbook from, generate an SSH key if you haven't already. You can do it with a simple command:
ssh-keygen
(press Enter until the command exits)
Next, copy it to the remote server with the ssh-copy-id command:
ssh-copy-id <remote server IP/FQDN>
After this, your Ansible control server will be able to connect to the remote server without a password prompt, and this error should not appear.
If this method doesn't work for you, please share this information:
your hosts file
the become user that you are using to run this playbook
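For reference, a minimal inventory sketch (the group names come from the playbook above; the IP addresses, user, and key path are placeholders you would replace with your own):
[igwcluster_AM]
10.0.0.11
10.0.0.12
[igwcluster_IS]
10.0.0.21
10.0.0.22
[all:vars]
# the user Ansible connects as, and the key distributed with ssh-copy-id above
ansible_user=your_ssh_user
ansible_ssh_private_key_file=~/.ssh/id_rsa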

How to enable password ssh authentication for Vagrant VM?

I'd like to enable password SSH authentication (and keep key-based authentication enabled) for my Vagrant VM. How do I set that up?
Vagrantfile:
Vagrant.configure("2") do |config|
  config.vm.box = "fedora/26-cloud-base"
  config.vm.box_version = "20170705"
  config.ssh.username = 'vagrant'
  config.ssh.password = 'a'
  config.ssh.keys_only = false
end
$ sudo vagrant ssh-config
Host default
HostName 192.168.121.166
User vagrant
Port 22
UserKnownHostsFile /dev/null
StrictHostKeyChecking no
PasswordAuthentication no
IdentityFile /home/jakub/src/kubernetes-vms/kubernetes/.vagrant/machines/default/libvirt/private_key
LogLevel FATAL
The password a is not accepted with these settings.
I guess the problem might be the PasswordAuthentication no in the output of vagrant ssh-config. How can that option be switched on?
On CentOS 7, using only the settings below is not enough. As far as I can tell, they just make su vagrant work with a password. I couldn't find anything on the official site explaining why the settings below don't work on their own.
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  config.ssh.username = 'vagrant'
  config.ssh.password = 'vagrant'
  config.ssh.insert_key = false
end
You also have to modify sshd_config, for example with a shell provisioner:
config.vm.provision "shell", inline: <<-SHELL
  sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/g' /etc/ssh/sshd_config
  systemctl restart sshd.service
SHELL
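As a usage note (my own assumption about the workflow, not part of the original answer): if the VM is already running, the provisioner has to be re-run for the sshd change to take effect, for example:
vagrant reload --provision    # re-run provisioners on the existing VM
ssh vagrant@<vm-ip>           # password authentication should now be accepted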
For me the following works. You need to ssh into the VM as usual and then edit /etc/ssh/sshd_config, setting PasswordAuthentication to yes instead of no. This will allow password authentication.
Vagrantfile:
Vagrant.configure("2") do |config|
  config.vm.box = "fedora/26-cloud-base"
  config.vm.box_version = "20170705"
  config.vm.provision 'shell', inline: 'echo "vagrant:a" | chpasswd'
end
The line config.vm.provision 'shell', inline: 'echo "vagrant:a" | chpasswd' invokes shell provisioning that changes the password of the vagrant user (provided the box comes with a predefined user called vagrant).
Then one can connect not only with vagrant ssh but also with
ssh vagrant@<vm-ip>
If you want to force password authentication for the VM, you need to set the following in your Vagrantfile:
config.ssh.username = 'vagrant'
config.ssh.password = 'vagrant'
config.ssh.insert_key = false
You need to make sure the vagrant user in the VM has the corresponding password. I am not sure about the box you use, so you'll need to verify that yourself. It works for the following box: ubuntu/trusty64. A complete Vagrantfile with these settings is sketched below.
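A minimal sketch of such a Vagrantfile, assuming the ubuntu/trusty64 box mentioned above (swap in your own box and password as needed):
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"   # box mentioned above; adjust as needed
  config.ssh.username = 'vagrant'
  config.ssh.password = 'vagrant'     # must match the vagrant user's password inside the box
  config.ssh.insert_key = false       # keep the box's existing key instead of inserting a new one
end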
To ssh with a password, this will automatically update the sshd config on debian/stretch64:
config.vm.provision "shell", inline: <<-SHELL
  sed -i 's/ChallengeResponseAuthentication no/ChallengeResponseAuthentication yes/g' /etc/ssh/sshd_config
  service ssh restart
SHELL
With the following, you enable password SSH authentication for a Linux VM and (if you wish) you can also set the password for the users vagrant and root:
Vagrant.configure("2") do |config|
  config.vm.box = "debian/bullseye64"
  config.vm.provision "shell", inline: <<-'SHELL'
    sed -i 's/^#* *\(PermitRootLogin\)\(.*\)$/\1 yes/' /etc/ssh/sshd_config
    sed -i 's/^#* *\(PasswordAuthentication\)\(.*\)$/\1 yes/' /etc/ssh/sshd_config
    systemctl restart sshd.service
    echo -e "vagrant\nvagrant" | (passwd vagrant)
    echo -e "root\nroot" | (passwd root)
  SHELL
end

vagrant provision ssh issue

I have vagrant running scotchbox. I recently tried to add id_rsa.pub to /vagrant/home/.ssh, hoping to be able to ssh in without entering a password. Once I did that it behaved the same, so I removed it. Now I was adding another site to the configuration, and I can no longer run vagrant provision, as it gives me the following error:
SSH authentication failed! This is typically caused by the
public/private keypair for the SSH user not being properly set on the
guest VM. Please verify that the guest VM is setup with the proper
public key, and that the private key path for Vagrant is setup
properly as well.
Here is what I get from vagrant ssh-config
HostName 127.0.0.1
User vagrant
Port 2222
UserKnownHostsFile /dev/null
StrictHostKeyChecking no
PasswordAuthentication no
IdentityFile "/Users/username/.vagrant.d/boxes/scotch-VAGRANTSLASH-box/3.0/virtualbox/vagrant_private_key"
IdentitiesOnly yes
LogLevel FATAL
Here is my Vagrantfile:
Vagrant.configure("2") do |config|
  config.vm.box = "scotch/box"
  config.vm.network "private_network", ip: "192.168.10.10"
  config.vm.hostname = "scotchbox"
  config.vm.synced_folder ".", "/var/www", :mount_options => ["dmode=775", "fmode=664"]
  config.vm.provider "virtualbox" do |v|
    v.memory = 2048
    v.cpus = 2
  end
  # Optional NFS. Make sure to remove other synced_folder line too
  #config.vm.synced_folder ".", "/var/www", :nfs => { :mount_options => ["dmode=777","fmode=666"] }
  config.vm.provision "shell", inline: <<-SHELL
    ## Only thing you probably really care about is right here
    DOMAINS=("site1.dev" "site2.dev" "site3.dev" "site4.dev" "site5.dev" "ai.d$
    ## Loop through all sites
    for ((i=0; i < ${#DOMAINS[@]}; i++)); do
      ## Current Domain
      DOMAIN=${DOMAINS[$i]}
      echo "Creating directory for $DOMAIN..."
      mkdir -p /var/www/$DOMAIN/public
      echo "Creating vhost config for $DOMAIN..."
      sudo cp /etc/apache2/sites-available/scotchbox.local.conf /etc/apache2/si$
      echo "Updating vhost config for $DOMAIN..."
      sudo sed -i s,scotchbox.local,$DOMAIN,g /etc/apache2/sites-available/$DOM$
      sudo sed -i s,/var/www/public,/var/www/$DOMAIN/public,g /etc/apache2/site$
      echo "Enabling $DOMAIN. Will probably tell you to restart Apache..."
      sudo a2ensite $DOMAIN.conf
    done
  SHELL
end
I also get an authentication error when doing vagrant up until it times out, but the box still starts and works, and I can ssh into it with the password.
I have looked through numerous other questions and have tried some things, but nothing seemed to fix it. Ideally I want to ssh in using keys, but I would settle for just getting back to a state where I can provision it and log in with a password.
Thanks
I figured this out by setting the following in my Vagrantfile so it uses the SSH key that Vagrant created when it initialized the machine:
config.ssh.private_key_path = "/pathtovagrantfolder/.vagrant/machines/default/virtualbox/private_key"
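In context, that is just one extra line inside the existing Vagrant.configure block; the path below is the placeholder from the answer, so substitute the directory that actually contains your Vagrantfile:
Vagrant.configure("2") do |config|
  config.vm.box = "scotch/box"
  # Point Vagrant at the private key it generated for this machine
  config.ssh.private_key_path = "/pathtovagrantfolder/.vagrant/machines/default/virtualbox/private_key"
end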

Ansible - establishing initial SSH connection

I am trying to copy an SSH public key to a newly created VM:
- hosts: vm1
  remote_user: root
  tasks:
    - name: deploy ssh key to account
      authorized_key: user='root' key="{{lookup('file','/root/.ssh/id_rsa.pub')}}"
But getting error:
fatal: [jenkins]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).\r\n", "unreachable": true}
So to establish SSH, I first need to establish SSH?
How can I establish SSH access to a newly created KVM guest automatically, without copying the key manually?
(host_key_checking = False is set in ansible.cfg)
Assuming the target machine allows root login with a password (from the error message it seems it does), you must provide the credentials to your playbook:
ansible-playbook playbook.yml --extra-vars "ansible_ssh_user=root ansible_ssh_pass=password"
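Equivalently (a sketch of my own, not part of the original answer), the same credentials can be placed in the inventory so they don't have to be passed on the command line; the host address and password below are placeholders, and password authentication requires sshpass on the control node:
[vm1]
192.168.122.10 ansible_ssh_user=root ansible_ssh_pass=yourpassword
Once the authorized_key task has run, you can drop ansible_ssh_pass and rely on the key.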
Something I tried (and it worked) when I had this same issue:
ansible target-server-name -m command -a "whatever command" -k
The -k flag prompts you for the SSH password of the target server.
Add the following to the /etc/ansible/hosts file:
[target-server-name]
target_server_ip
Example:
ansible target-server-name -m ping -k

pinging ec2 instance from ansible

I have an ec2 amazon linux running which I can ssh in to using:
ssh -i "keypair.pem" ec2-user#some-ip.eu-west-1.compute.amazonaws.com
but when I try to ping the server using ansible I get:
testserver | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh.",
"unreachable": true
}
I'm using the following hosts file:
testserver ansible_ssh_host=some-ip.eu-west-1.compute.amazonaws.com ansible_ssh_user=ec2-user ansible_ssh_private_key_file=/Users/me/playbook/key-pair.pem
and running the following command to run ansible:
ansible testserver -i hosts -m ping -vvvvv
The output is:
<some-ip.eu-west-1.compute.amazonaws.com> ESTABLISH SSH CONNECTION FOR USER: ec2-user
<some-ip.eu-west-1.compute.amazonaws.com> SSH: ansible.cfg set ssh_args: (-o)(ControlMaster=auto)(-o)(ControlPersist=60s)
<some-ip.eu-west-1.compute.amazonaws.com> SSH: ANSIBLE_PRIVATE_KEY_FILE/private_key_file/ansible_ssh_private_key_file set: (-o)(IdentityFile="/Users/me/playbook/key-pair.pem")
<some-ip.eu-west-1.compute.amazonaws.com> SSH: ansible_password/ansible_ssh_pass not set: (-o)(KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no)
<some-ip.eu-west-1.compute.amazonaws.com> SSH: ANSIBLE_REMOTE_USER/remote_user/ansible_user/user/-u set: (-o)(User=ec2-user)
<some-ip.eu-west-1.compute.amazonaws.com> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<some-ip.eu-west-1.compute.amazonaws.com> SSH: PlayContext set ssh_common_args: ()
<some-ip.eu-west-1.compute.amazonaws.com> SSH: PlayContext set ssh_extra_args: ()
<some-ip.eu-west-1.compute.amazonaws.com> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/Users/me/.ansible/cp/ansible-ssh-%h-%p-%r)
<some-ip.eu-west-1.compute.amazonaws.com> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/Users/me/playbook/key-pair.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/Users/me/.ansible/cp/ansible-ssh-%h-%p-%r ec2-52-18-106-35.eu-west-1.compute.amazonaws.com '/bin/sh -c '"'"'( umask 22 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1462096401.65-214839021792201 `" && echo "` echo $HOME/.ansible/tmp/ansible-tmp-1462096401.65-214839021792201 `" )'"'"''
testserver | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh.",
"unreachable": true
}
What am i doing wrong?
Try this solution; it worked fine for me:
ansible ipaddress -m ping -i inventory -u ec2-user
where inventory is the host file name.
inventory :
[host]
xx.xx.xx.xx
[host:vars]
ansible_user=ec2-user
ansible_ssh_private_key_file=/location of your pem file/filename.pem
I was facing this problem because I hadn't given the location of the hosts file I was referring to.
This is what my hosts file looks like:
[apache] is the group of hosts on which we are going to install the Apache server.
ansible_ssh_private_key_file should be the path of the downloaded .pem file used to access your instances. In my case both instances have the same credentials.
[apache]
50.112.133.205 ansible_ssh_user=ubuntu
54.202.7.87 ansible_ssh_user=ubuntu
[apache:vars]
ansible_user=ubuntu
ansible_ssh_private_key_file=/home/hashimyousaf/Desktop/hashim_key_oregon.pem
I was having a similar problem, and reading through Troubleshooting Connecting to Your Instance helped me. Specifically, I was pinging an Ubuntu instance from an Amazon Linux instance but forgot to change the connection username from "ec2-user" to "ubuntu"!
You have to change the hosts file and make sure you have the correct username:
test2 ansible_ssh_host=something.something.eu-west-1.compute.amazonaws.com ansible_ssh_user=theUser
'test2' - the name I have given to the SSH machine in my local Ansible hosts file
'ansible_ssh_host=something.something.eu-west-1.compute.amazonaws.com' - the connection address of the EC2 instance
'ansible_ssh_user=theUser' - the user of the instance (important)
SSH into your instance; the prompt will look like [theUser@Instance:]. Make sure you copy that 'theUser' into the hosts file as the 'ansible_ssh_user' variable.
Then try to ping it.
If this does not work, check whether ICMP traffic is allowed in your AWS security group settings.
Worked for me ->
vi inventory
[hosts]
serveripaddress ansible_ssh_user=ec2-user
[hosts:vars]
ansible_user=ec2-user
ansible_ssh_private_key_file=/home/someuser/ansible1.pem
chmod 400 ansible1.pem
ansible -i inventory hosts -m ping -u ec2-user