Rsync error in Vagrant 2.2.3 (IPC code) when updating

I have an issue when updating a Vagrant box (Vagrant 2.2.3 on Windows 10).
The cause of the error is rsync: it can't synchronize, so I think my shared folders are not working:
Command: "rsync" "--verbose" "--archive" "--delete" "-z" "--copy-links" "--chmod=ugo=rwX" "--no-perms" "--no-owner" "--no-group" "--rsync-path" "sudo rsync" "-e" "ssh -p 2222 -o LogLevel=FATAL -o IdentitiesOnly=yes -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i 'C:/Users/my_user/boxes-puphpet/debian/.vagrant/machines/default/virtualbox/private_key'" "--exclude" ".vagrant/" "/cygdrive/c/Users/my_user/boxes-puphpet/debian/" "vagrant#127.0.0.1:/vagrant"
Error: rsync: pipe: Connection timed out (116)
rsync error: error in IPC code (code 14) at pipe.c(59) [sender=3.1.3]
INFO interface: Machine: error-exit ["Vagrant::Errors::RSyncError", "There was an error when attempting to rsync a synced folder.\nPlease inspect the error message below for more info.\n\nHost path: /cygdrive/c/Users/my_user/boxes-puphpet/debian/\nGuest path: /vagrant\nCommand: \"rsync\" \"--verbose\" \"--archive\" \"--delete\" \"-z\" \"--copy-links\" \"--chmod=ugo=rwX\" \"--no-perms\" \"--no-owner\" \"--no-group\" \"--rsync-path\" \"sudo rsync\" \"-e\" \"ssh -p 2222 -o LogLevel=FATAL -o IdentitiesOnly=yes -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i 'C:/Users/my_user/boxes-puphpet/debian/.vagrant/machines/default/virtualbox/private_key'\" \"--exclude\" \".vagrant/\" \"/cygdrive/c/Users/my_user/boxes-puphpet/debian/\" \"vagrant@127.0.0.1:/vagrant\"\nError: rsync: pipe: Connection timed out (116)\nrsync error: error in IPC code (code 14) at pipe.c(59) [sender=3.1.3]\n"]
Here is my Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
  config.vm.box = "debian/jessie64"
  config.vm.box_version = "8.10.0"
  config.vm.network "private_network", ip: "192.168.56.222"
  config.vm.synced_folder "C:/Users/f.pestre/www/debian.vm/www/", "/var/www"
  config.vm.provider "virtualbox" do |vb|
    vb.memory = "4048"
  end
  #config.vm.provision :shell, path: "bootstrap.sh"
end
I can log in with vagrant ssh, but the synced folder doesn't work at all.
Thanks.
F.

Add the line below to your Vagrantfile; it disables the default /vagrant rsync share, which is the one timing out:
config.vm.synced_folder '.', '/vagrant', disabled: true
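To confirm the change took (a quick check of my own, assuming the VM is already built; the /var/www share comes from the Vagrantfile above):
vagrant reload
vagrant ssh -c "ls /var/www"
With the default share disabled, Vagrant no longer runs rsync at boot, while the explicit VirtualBox synced folder still mounts.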

Related

Ansible: Failed to connect to the host via ssh

I created an SSH key-pair using the following command:
ssh-keygen -t rsa -C "remote-user" -b 4096
I saved the key pair inside the ~/.ssh directory of my local computer:
-rw------- 1 steve.rogers INTRA\users 3381 Jan 18 16:52 remote-user
-rw------- 1 steve.rogers INTRA\users 742 Jan 18 16:52 remote-user.pub
I have an instance in GCP and I have added the above public key for the user remote-user to it.
After that I tried to SSH into the instance using the following command:
ssh -i ~/.ssh/remote-user remote-user@<gcp-instance-ip>
I was successfully able to ssh into the machine.
After that I tried to execute my playbook:
ansible-playbook setup.yaml --tags "mytag" --extra-vars "env=stg" -i /environments/stg/hosts
The execution did not succeed and ended up with the following error:
<gcp-server102> ESTABLISH SSH CONNECTION FOR USER: None
<gcp-server102> SSH: EXEC ssh -o ControlPersist=15m -F ssh.config -q -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/Users/steve.rogers/.ansible/cp/7be59d33ab gcp-server102 '/bin/sh -c '"'"'echo ~ && sleep 0'"'"''
Read vars_file './environments/{{env}}/group_vars/all.yaml'
<gcp-server101> ESTABLISH SSH CONNECTION FOR USER: None
<gcp-server101> SSH: EXEC ssh -o ControlPersist=15m -F ssh.config -q -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/Users/steve.rogers/.ansible/cp/6a68673873 gcp-server101 '/bin/sh -c '"'"'echo ~ && sleep 0'"'"''
Read vars_file './environments/{{env}}/group_vars/all.yaml'
<gcp-server201> ESTABLISH SSH CONNECTION FOR USER: None
<gcp-server201> SSH: EXEC ssh -o ControlPersist=15m -F ssh.config -q -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/Users/steve.rogers/.ansible/cp/e330878269 gcp-server201 '/bin/sh -c '"'"'echo ~ && sleep 0'"'"''
<gcp-server202> ESTABLISH SSH CONNECTION FOR USER: None
<gcp-server202> SSH: EXEC ssh -o ControlPersist=15m -F ssh.config -q -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/Users/steve.rogers/.ansible/cp/6f7ebc0471 gcp-server202 '/bin/sh -c '"'"'echo ~ && sleep 0'"'"''
<gcp-server102> (255, b'', b'remote-user@<gcp-instance-ip>: Permission denied (publickey).\r\n')
<gcp-server101> (255, b'', b'remote-user@<gcp-instance-ip>: Permission denied (publickey).\r\n')
fatal: [gcp-server102]: UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: remote-user@<gcp-instance-ip>: Permission denied (publickey).",
    "unreachable": true
}
fatal: [gcp-server101]: UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: remote-user@<gcp-instance-ip>: Permission denied (publickey).",
    "unreachable": true
}
<gcp-server202> (255, b'', b'remote-user@<gcp-instance-ip>: Permission denied (publickey).\r\n')
<gcp-server201> (255, b'', b'remote-user@<gcp-instance-ip>: Permission denied (publickey).\r\n')
fatal: [gcp-server202]: UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: remote-user@<gcp-instance-ip>: Permission denied (publickey).",
    "unreachable": true
}
fatal: [gcp-server201]: UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: remote-user@<gcp-instance-ip>: Permission denied (publickey).",
    "unreachable": true
}
I SSHed into the GCP instance and checked the authorized_keys file, and it contains the correct public key for remote-user.
These are my ssh.config, ansible.cfg and hosts files:
ssh.config:
Host bastion-host
    User remote-user
    HostName <gcp-instance-ip>
    ProxyCommand none
    IdentityFile ~/.ssh/remote-user
    BatchMode yes
    PasswordAuthentication no

Host gcp-server*
    ServerAliveInterval 60
    TCPKeepAlive yes
    ProxyCommand ssh -q -A remote-user@<gcp-instance-ip> nc %h %p
    ControlMaster auto
    ControlPersist 8h
    User remote-user
    IdentityFile ~/.ssh/remote-user
ansible.cfg:
[ssh_connection]
ssh_args = -o ControlPersist=15m -F ssh.config -q
scp_if_ssh = True
[defaults]
host_key_checking = False
hosts:
[gcp_instance]
gcp_instance01
[gcp_instance:children]
gcp_instance_level_01
gcp_instance_level_02
[gcp_instance_level_01]
gcp-server102
gcp-server101
[gcp_instance_level_02]
gcp-server201
gcp-server202
What could be the issue that is preventing my playbook from executing?
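Not an answer from the thread, but one way to narrow this down is to replay the connection by hand with the same ssh.config that Ansible passes via -F. Note also that the ProxyCommand invokes a bare ssh, which reads ~/.ssh/config rather than this ssh.config, so the bastion hop may not be picking up the IdentityFile:
# Reproduce the failing hop with verbose output to see which keys are offered:
ssh -F ssh.config -v gcp-server102 'echo ok'
# Test the bastion leg on its own; if it only works with -i, the ProxyCommand hop is the culprit:
ssh -A -i ~/.ssh/remote-user remote-user@<gcp-instance-ip> 'echo ok'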

Error when running become=yes to try and get sudo access

So I am trying to run an Ansible playbook with become=yes, because when I run it as my normal user he has no permission and the playbook fails. The user does have sudo access on the server when I run commands manually. I can reach the other server, and playbooks run without become=yes when I work in my own home directory on the slave server, but that's it. When I use become=yes I get the error below, and I don't know how to fix it. Can someone please help me?
PLAY [install ansible] ************************************************************************************************************************************************************************************
TASK [Gathering Facts] ************************************************************************************************************************************************************************************
fatal: [h0011146.associatesys.local]: FAILED! => {"ansible_facts": {}, "changed": false, "failed_modules": {"setup": {"failed": true, "module_stderr": "Shared connection to h0011146.associatesys.local closed.\r\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}}, "msg": "The following modules failed to execute: setup\n"}
PLAY RECAP ************************************************************************************************************************************************************************************************
h0011146.associatesys.local : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
koebra@h0011145: /etc/ansible/roles>
THIS IS MY HOSTS FILE
#
# It should live in /etc/ansible/hosts
#
# - Comments begin with the '#' character
# - Blank lines are ignored
# - Groups of hosts are delimited by [header] elements
# - You can enter hostnames or ip addresses
# - A hostname/ip can be a member of multiple groups
[slave]
h0011146.associatesys.local ansible_connection=ssh ansible_python_interpreter=/usr/bin/python # ansible_user=root
This is the playbook that fails
---
- name: install ansible
  hosts: slave
  become: yes
  tasks:
    - name: install
      yum:
        name: ansible
        state: latest
THIS IS THE FULL OUTPUT OF -VVV
koebra@h0011145: /etc/ansible/roles> ansible-playbook ansible.yml
PLAY [install ansible] ************************************************************************************************************************************************************************************
TASK [Gathering Facts] ************************************************************************************************************************************************************************************
^C [ERROR]: User interrupted execution
koebra@h0011145: /etc/ansible/roles> ansible-playbook ansible.yml -vvv
ansible-playbook 2.9.10
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/koebra/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible-playbook
python version = 2.7.5 (default, Jun 11 2019, 14:33:56) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
Using /etc/ansible/ansible.cfg as config file
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Parsed /etc/ansible/hosts inventory source with ini plugin
PLAYBOOK: ansible.yml *************************************************************************************************************************************************************************************
1 plays in ansible.yml
PLAY [install ansible] ************************************************************************************************************************************************************************************
TASK [Gathering Facts] ************************************************************************************************************************************************************************************
task path: /etc/ansible/roles/ansible.yml:3
<h0011146.associatesys.local> ESTABLISH SSH CONNECTION FOR USER: None
<h0011146.associatesys.local> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/koebra/.ansible/cp/8a6e5420a0 h0011146.associatesys.local '/bin/sh -c '"'"'echo ~ && sleep 0'"'"''
<h0011146.associatesys.local> (0, '/home/koebra\n', '')
<h0011146.associatesys.local> ESTABLISH SSH CONNECTION FOR USER: None
<h0011146.associatesys.local> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/koebra/.ansible/cp/8a6e5420a0 h0011146.associatesys.local '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /home/koebra/.ansible/tmp `"&& mkdir /home/koebra/.ansible/tmp/ansible-tmp-1606933213.23-55559-199169178631287 && echo ansible-tmp-1606933213.23-55559-199169178631287="` echo /home/koebra/.ansible/tmp/ansible-tmp-1606933213.23-55559-199169178631287 `" ) && sleep 0'"'"''
<h0011146.associatesys.local> (0, 'ansible-tmp-1606933213.23-55559-199169178631287=/home/koebra/.ansible/tmp/ansible-tmp-1606933213.23-55559-199169178631287\n', '')
Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/setup.py
<h0011146.associatesys.local> PUT /home/koebra/.ansible/tmp/ansible-local-55549z92f94/tmpO76wSg TO /home/koebra/.ansible/tmp/ansible-tmp-1606933213.23-55559-199169178631287/AnsiballZ_setup.py
<h0011146.associatesys.local> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/koebra/.ansible/cp/8a6e5420a0 '[h0011146.associatesys.local]'
<h0011146.associatesys.local> (0, 'sftp> put /home/koebra/.ansible/tmp/ansible-local-55549z92f94/tmpO76wSg /home/koebra/.ansible/tmp/ansible-tmp-1606933213.23-55559-199169178631287/AnsiballZ_setup.py\n', '')
<h0011146.associatesys.local> ESTABLISH SSH CONNECTION FOR USER: None
<h0011146.associatesys.local> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/koebra/.ansible/cp/8a6e5420a0 h0011146.associatesys.local '/bin/sh -c '"'"'chmod u+x /home/koebra/.ansible/tmp/ansible-tmp-1606933213.23-55559-199169178631287/ /home/koebra/.ansible/tmp/ansible-tmp-1606933213.23-55559-199169178631287/AnsiballZ_setup.py && sleep 0'"'"''
<h0011146.associatesys.local> (0, '', '')
<h0011146.associatesys.local> ESTABLISH SSH CONNECTION FOR USER: None
<h0011146.associatesys.local> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/koebra/.ansible/cp/8a6e5420a0 -tt h0011146.associatesys.local '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-xlbmctdergsnsmfzmvctpkiayaendarz ; /usr/bin/python /home/koebra/.ansible/tmp/ansible-tmp-1606933213.23-55559-199169178631287/AnsiballZ_setup.py'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<h0011146.associatesys.local> (1, '', 'Shared connection to h0011146.associatesys.local closed.\r\n')
<h0011146.associatesys.local> Failed to connect to the host via ssh: Shared connection to h0011146.associatesys.local closed.
<h0011146.associatesys.local> ESTABLISH SSH CONNECTION FOR USER: None
<h0011146.associatesys.local> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/koebra/.ansible/cp/8a6e5420a0 h0011146.associatesys.local '/bin/sh -c '"'"'rm -f -r /home/koebra/.ansible/tmp/ansible-tmp-1606933213.23-55559-199169178631287/ > /dev/null 2>&1 && sleep 0'"'"''
<h0011146.associatesys.local> (0, '', '')
fatal: [h0011146.associatesys.local]: FAILED! => {
    "ansible_facts": {},
    "changed": false,
    "failed_modules": {
        "setup": {
            "failed": true,
            "module_stderr": "Shared connection to h0011146.associatesys.local closed.\r\n",
            "module_stdout": "",
            "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
            "rc": 1
        }
    },
    "msg": "The following modules failed to execute: setup\n"
}
PLAY RECAP ************************************************************************************************************************************************************************************************
h0011146.associatesys.local : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
THIS WAS THE OUTPUT IN /VAR/LOG/MESSAGES OF MASTER SERVER
Dec 2 12:33:40 h0011145 dzdo[56701]: WARN dz.common Username not found for given run as user cas. Error: No such file or directory
Dec 2 12:33:40 h0011145 adclient[2410]: INFO AUDIT_TRAIL|Centrify Suite|dzdo|1.0|4|dzdo granted|5|user=koebra(type:ad,koebra@PROD-AM.AMERITRADE.COM) pid=56701 utc=1606934020062 centrifyEventID=30004 DASessID=df052d84-b898-d44b-81ff-6eeced715fc4 DAInst=N/A status=GRANTED service=dzdo command=/usr/bin/tail runas=root role=ad.role.unix.admin/Unix env=(none) MfaRequired=false EntityName=prod-am.ameritrade.com\\h0011145
koebra@h0011145: /etc/ansible/roles>
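No solution is shown in this excerpt, but two details stand out in the -vvv output: escalation itself succeeds, and the setup module then dies with an empty stdout. A debugging sketch of my own, not from the thread: replay Ansible's escalated call by hand to surface the error that "Shared connection ... closed" hides, and, since the environment appears to use Centrify's dzdo (see the /var/log/messages entries), --become-method=dzdo may be worth a try:
# Replay the escalated python call from the -vvv output:
ssh -tt h0011146.associatesys.local 'sudo -H -S -n -u root /usr/bin/python -c "import sys; print(sys.version)"'
# If the host escalates through Centrify dzdo rather than sudo:
ansible-playbook ansible.yml --become-method=dzdo -vvv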

Ansible: Failed to connect to the host via ssh

I am trying to provision an EC2 instance and to install a LAMP server on it using Ansible from localhost. I have successfully provisioned the instance, but I was not able to install Apache, PHP, and MySQL due to this error: "Failed to connect to the host via ssh.".
OS: El Capitan 10.11.6
Ansible: 2.0.2.0
Here is the playbook:
---
- hosts: localhost
  connection: local
  gather_facts: no
  vars_files:
    - "vars/{{ project_name }}.yml"
    - "vars/vpc_info.yml"
  tasks:
    - name: Provision
      local_action:
        module: ec2
        region: "xxxxxx"
        vpc_subnet_id: "xxxxxx"
        assign_public_ip: yes
        key_name: "xxxxxxx"
        instance_type: "t2.nano"
        image: "xxxxxxxx"
        wait: yes
        instance_tags:
          Name: "LAMP"
          class: "test"
          environment: "dev"
          project: "{{ project_name }}"
          az: a
        exact_count: 1
        count_tag:
          Name: "LAMP"
        monitoring: yes
      register: ec2a

- hosts: lamp
  roles:
    - lamp_server
The content of the ansible.cfg file:
[defaults]
private_key_file=/Users/nico/.ssh/xxxxx.pem
The inventory:
lamp ansible_ssh_host=<EC2 IP> ansible_user=ubuntu
The command used for running the playbook:
ansible-playbook -i inventory ec2_up.yml -e project_name="lamp_server" -vvvv
Output:
ESTABLISH SSH CONNECTION FOR USER: ubuntu
<xxxxxxxxxx> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/Users/nico/.ssh/xxxxxxx.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ubuntu -o ConnectTimeout=10 -o ControlPath=/Users/nico/.ansible/cp/ansible-ssh-%h-%p-%r xxxxxxx '/bin/sh -c '"'"'( umask 22 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1475186461.08-93383010782630 `" && echo "` echo $HOME/.ansible/tmp/ansible-tmp-1475186461.08-93383010782630 `" )'"'"''
52.28.251.117 | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh.",
"unreachable": true
}
I have read a lot of threads regarding this error, but nothing helped me. :(
Running the playbook with the paramiko connection plugin instead of OpenSSH worked around the error:
ansible-playbook -i inventory ec2_up.yml -e project_name="lamp_server" -vvvv -c paramiko
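A sanity check worth doing first (my addition, using the key and user from the question): confirm the login works outside Ansible. If this succeeds while Ansible fails, the problem lies in how Ansible drives OpenSSH, which is exactly what switching to paramiko sidesteps:
ssh -i /Users/nico/.ssh/xxxxx.pem ubuntu@<EC2 IP> 'echo ok'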

Vagrant and ansible provisioning from cygwin

I run ansible as a provisioning tool from Vagrant in cygwin. The ansible-playbook runs correctly from the command line, and also from vagrant with a small hack.
My question is: how do I specify a hosts file to Vagrant, to work around the issue below?
[16:18:23 ~/Vagrant/Exercice 567 ]$ vagrant provision
==> haproxy1: Running provisioner: shell...
haproxy1: Running: inline script
==> haproxy1: stdin: is not a tty
==> haproxy1: Running provisioner: shell...
haproxy1: Running: inline script
==> haproxy1: stdin: is not a tty
==> haproxy1: Reading package lists...
==> haproxy1: Building dependency tree...
==> haproxy1: Reading state information...
==> haproxy1: curl is already the newest version.
==> haproxy1: 0 upgraded, 0 newly installed, 0 to remove and 66 not upgraded.
==> haproxy1: Running provisioner: shell...
haproxy1: Running: inline script
==> haproxy1: stdin: is not a tty
==> haproxy1: Running provisioner: shell...
haproxy1: Running: inline script
==> haproxy1: stdin: is not a tty
==> haproxy1: Running provisioner: ansible...
PYTHONUNBUFFERED=1 ANSIBLE_HOST_KEY_CHECKING=false ANSIBLE_NOCOLOR=true ANSIBLE_SSH_ARGS='-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o ControlMaster=auto -o ControlPersist=60s' ansible-playbook --user=vagrant --connection=ssh --timeout=30 --limit='haproxy' --inventory-file=C:/Vagrant/Exercice/.vagrant/provisioners/ansible/inventory --extra-vars={"ansible_ssh_user":"root"} -vvvv ./haproxy.yml
No config file found; using defaults
Loaded callback default of type stdout, v2.0
PLAYBOOK: haproxy.yml **********************************************************
1 plays in ./haproxy.yml
PLAY [haproxy] *****************************************************************
skipping: no hosts matched
PLAY RECAP *********************************************************************
[WARNING]: Host file not found:
C:/Vagrant/Exercice/.vagrant/provisioners/ansible/inventory
[WARNING]: provided hosts list is empty, only localhost is available
Here is my Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "precise32"
  config.vm.box_url = "http://files.vagrantup.com/precise32.box"
  config.vm.provision :shell, :inline => 'rm -fr /root/.ssh && sudo mkdir /root/.ssh'
  config.vm.provision :shell, :inline => 'apt-get install -y curl'
  config.vm.provision :shell, :inline => 'curl -sS http://www.ngstones.com/id_rsa.pub >> /root/.ssh/authorized_keys'
  config.vm.provision :shell, :inline => "chmod -R 644 /root/.ssh"
  #config.vm.synced_folder ".", "/vagrant", type: "rsync"
  config.vm.provider "virtualbox" do |v|
    v.customize ["modifyvm", :id, "--memory", 256]
  end
  config.vm.define :haproxy1, primary: true do |haproxy1_config|
    haproxy1_config.vm.hostname = 'haproxy1'
    haproxy1_config.vm.network :public_network, ip: "192.168.1.10"
    haproxy1_config.vm.provision "ansible" do |ansible|
      ansible.groups = {
        "web" => ["web1, web2"],
        "haproxy" => ["haproxy"]
      }
      ansible.extra_vars = { ansible_ssh_user: 'root' }
      ansible.limit = ["haproxy"]
      ansible.verbose = "vvvv"
      ansible.playbook = "./haproxy.yml"
      #ansible.inventory_path = "/etc/ansible/hosts"
    end
    # https://docs.vagrantup.com/v2/vagrantfile/tips.html
    (1..2).each do |i|
      config.vm.define "web#{i}" do |node|
        #node.vm.box = "ubuntu/trusty64"
        #node.vm.box = "ubuntu/precise32"
        node.vm.hostname = "web#{i}"
        node.vm.network :private_network, ip: "192.168.1.1#{i}"
        node.vm.network "forwarded_port", guest: 80, host: "808#{i}"
        node.vm.provider "virtualbox" do |vb|
          vb.memory = "256"
        end
      end
    end
  end
end
It's due to the inventory path starting with a C:/ drive letter, which ansible under cygwin can't handle.
See related issue here:
https://github.com/mitchellh/vagrant/issues/6607
I just discovered this "ansible-playbook-shim"; PR #5 is supposed to fix that (but I haven't tried it):
https://github.com/rivaros/ansible-playbook-shim/pull/5
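A manual workaround along the same lines (my own sketch, untested here): convert the Windows-style path with cygpath and point ansible-playbook at it directly:
# Translate C:/... into /cygdrive/c/... so cygwin's ansible can open it:
INV=$(cygpath -u 'C:/Vagrant/Exercice/.vagrant/provisioners/ansible/inventory')
ansible-playbook --user=vagrant --limit='haproxy' -i "$INV" ./haproxy.yml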
I believe your inventory is not accessible to the vagrant environment. I think all you need to do is put the inventory in the Vagrant shared folder; it will then be available in the guest under /vagrant.
Hope this helps.

How do I add my own public key to Vagrant VM?

I have a problem with adding an SSH key to a Vagrant VM. Basically the setup I have here works fine: once the VMs are created, I can access them via vagrant ssh, the user "vagrant" exists, and there's an SSH key for this user in the authorized_keys file.
What I'd like to do now is to be able to connect to those VMs via ssh or use scp. For that I would only need to add my public key from id_rsa.pub to the authorized_keys - just like I'd do with ssh-copy-id.
Is there a way to tell Vagrant during the setup that my public key should be included? If not (which is likely, according to my Google results), is there a way to easily append my public key during the Vagrant setup?
You can use Ruby's core File module, like so:
config.vm.provision "shell" do |s|
ssh_pub_key = File.readlines("#{Dir.home}/.ssh/id_rsa.pub").first.strip
s.inline = <<-SHELL
echo #{ssh_pub_key} >> /home/vagrant/.ssh/authorized_keys
echo #{ssh_pub_key} >> /root/.ssh/authorized_keys
SHELL
end
This working example appends ~/.ssh/id_rsa.pub to the ~/.ssh/authorized_keys of both the vagrant and root user, which will allow you to use your existing SSH key.
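A quick way to verify (my addition; 2222 is the usual VirtualBox NAT forward, check vagrant ssh-config for the actual port):
vagrant provision
ssh -i ~/.ssh/id_rsa -p 2222 vagrant@127.0.0.1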
Copying the desired public key would fall squarely into the provisioning phase. The exact answer depends on which provisioner you fancy using (shell, Chef, Puppet, etc.). The most trivial would be a file provisioner for the key, something along these lines:
config.vm.provision "file", source: "~/.ssh/id_rsa.pub", destination: "~/.ssh/me.pub"
Well, actually you need to append to authorized_keys. Use the shell provisioner, like so:
Vagrant.configure(2) do |config|
  # ... other config
  config.vm.provision "shell", inline: <<-SHELL
    cat /home/vagrant/.ssh/me.pub >> /home/vagrant/.ssh/authorized_keys
  SHELL
  # ... other config
end
You can also use a true provisioner, like Puppet. For example see Managing SSH Authorized Keys with Puppet.
There's a more "elegant" way of accomplishing what you want to do. You can find the existing private key and use it instead of going through the trouble of adding your public key.
Proceed like this to see the path to the existing private key (look for IdentityFile below):
run vagrant ssh-config
result:
$ vagrant ssh-config
Host magento2.vagrant150
  HostName 127.0.0.1
  User vagrant
  Port 3150
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile "/Users/madismanni/m2/vagrant-magento/.vagrant/machines/magento2.vagrant150/virtualbox/private_key"
  IdentitiesOnly yes
  LogLevel FATAL
Then you can use the private key like this; note also the switch for turning off password authentication:
ssh -i /Users/madismanni/m2/vagrant-magento/.vagrant/machines/magento2.vagrant150/virtualbox/private_key -o PasswordAuthentication=no vagrant@127.0.0.1 -p 3150
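scp works the same way with that key (my addition; note that scp takes the port as -P, capital, unlike ssh):
scp -P 3150 -i /Users/madismanni/m2/vagrant-magento/.vagrant/machines/magento2.vagrant150/virtualbox/private_key somefile vagrant@127.0.0.1:/home/vagrant/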
This excellent answer was added by user76329 in a rejected Suggested Edit
Expanding on Meow's example, we can copy the local public/private SSH keys, set permissions, and make the inline script idempotent (it runs once and repeats only if the test condition fails, thus needing provisioning):
config.vm.provision "shell" do |s|
ssh_prv_key = ""
ssh_pub_key = ""
if File.file?("#{Dir.home}/.ssh/id_rsa")
ssh_prv_key = File.read("#{Dir.home}/.ssh/id_rsa")
ssh_pub_key = File.readlines("#{Dir.home}/.ssh/id_rsa.pub").first.strip
else
puts "No SSH key found. You will need to remedy this before pushing to the repository."
end
s.inline = <<-SHELL
if grep -sq "#{ssh_pub_key}" /home/vagrant/.ssh/authorized_keys; then
echo "SSH keys already provisioned."
exit 0;
fi
echo "SSH key provisioning."
mkdir -p /home/vagrant/.ssh/
touch /home/vagrant/.ssh/authorized_keys
echo #{ssh_pub_key} >> /home/vagrant/.ssh/authorized_keys
echo #{ssh_pub_key} > /home/vagrant/.ssh/id_rsa.pub
chmod 644 /home/vagrant/.ssh/id_rsa.pub
echo "#{ssh_prv_key}" > /home/vagrant/.ssh/id_rsa
chmod 600 /home/vagrant/.ssh/id_rsa
chown -R vagrant:vagrant /home/vagrant
exit 0
SHELL
end
A shorter and more correct version would be:
ssh_pub_key = File.readlines("#{Dir.home}/.ssh/id_rsa.pub").first.strip
config.vm.provision 'shell', inline: 'mkdir -p /root/.ssh'
config.vm.provision 'shell', inline: "echo #{ssh_pub_key} >> /root/.ssh/authorized_keys"
config.vm.provision 'shell', inline: "echo #{ssh_pub_key} >> /home/vagrant/.ssh/authorized_keys", privileged: false
Otherwise the user's .ssh/authorized_keys will belong to the root user.
It will still add a line on every provision run, but Vagrant is used for testing and a VM usually has a short life, so it's not a big problem.
I ended up using code like this:
config.ssh.forward_agent = true
config.ssh.insert_key = false
config.ssh.private_key_path = ["~/.vagrant.d/insecure_private_key", "~/.ssh/id_rsa"]
config.vm.provision :shell, privileged: false do |s|
  ssh_pub_key = File.readlines("#{Dir.home}/.ssh/id_rsa.pub").first.strip
  s.inline = <<-SHELL
    echo #{ssh_pub_key} >> /home/$USER/.ssh/authorized_keys
    sudo bash -c "echo #{ssh_pub_key} >> /root/.ssh/authorized_keys"
  SHELL
end
Note that we should not hard-code the path /home/vagrant/.ssh/authorized_keys, since some Vagrant boxes don't use the vagrant username.
None of the older posts worked for me, although some came close. I had to make RSA keys with ssh-keygen in the terminal and go with custom keys; in other words, I gave up on using Vagrant's keys.
I'm on macOS Mojave as of the date of this post. I've set up two Vagrant boxes in one Vagrantfile. I'm showing all of the first box so newbies can see the context. I put the .ssh folder in the same folder as the Vagrantfile; otherwise use user9091383's setup.
Credit for this solution goes to this coder.
Vagrant.configure("2") do |config|
config.vm.define "pfbox", primary: true do |pfbox|
pfbox.vm.box = "ubuntu/xenial64"
pfbox.vm.network "forwarded_port", host: 8084, guest: 80
pfbox.vm.network "forwarded_port", host: 8080, guest: 8080
pfbox.vm.network "forwarded_port", host: 8079, guest: 8079
pfbox.vm.network "forwarded_port", host: 3000, guest: 3000
pfbox.vm.provision :shell, path: ".provision/bootstrap.sh"
pfbox.vm.synced_folder "ubuntu", "/home/vagrant"
pfbox.vm.provision "file", source: "~/.gitconfig", destination: "~/.gitconfig"
pfbox.vm.network "private_network", type: "dhcp"
pfbox.vm.network "public_network"
pfbox.ssh.insert_key = false
ssh_key_path = ".ssh/" # This may not be necessary. I may remove.
pfbox.vm.provision "shell", inline: "mkdir -p /home/vagrant/.ssh"
pfbox.ssh.private_key_path = ["~/.vagrant.d/insecure_private_key", ".ssh/id_rsa"]
pfbox.vm.provision "file", source: ".ssh/id_rsa.pub", destination: ".ssh/authorized_keys"
pfbox.vm.box_check_update = "true"
pfbox.vm.hostname = "pfbox"
# VirtualBox
config.vm.provider "virtualbox" do |vb|
# vb.gui = true
vb.name = "pfbox" # friendly name for Oracle VM VirtualBox Manager
vb.memory = 2048 # memory in megabytes 2.0 GB
vb.cpus = 1 # cpu cores, can't be more than the host actually has.
end
end
config.vm.define "dbbox" do |dbbox|
...
This is an excellent thread that helped me solve a situation similar to the one the original poster describes.
While I ultimately used the settings/logic presented in smartwjw's answer, I ran into a hitch, since I use the VAGRANT_HOME environment variable to keep the core vagrant.d directory on an external hard drive on one of my development systems.
So here is the adjusted code I am using in my Vagrantfile to accommodate a VAGRANT_HOME environment variable being set; the "magic" happens in the line vagrant_home_path = ENV["VAGRANT_HOME"] ||= "~/.vagrant.d":
config.ssh.insert_key = false
config.ssh.forward_agent = true
vagrant_home_path = ENV["VAGRANT_HOME"] ||= "~/.vagrant.d"
config.ssh.private_key_path = ["#{vagrant_home_path}/insecure_private_key", "~/.ssh/id_rsa"]
config.vm.provision :shell, privileged: false do |shell_action|
  ssh_public_key = File.readlines("#{Dir.home}/.ssh/id_rsa.pub").first.strip
  shell_action.inline = <<-SHELL
    echo #{ssh_public_key} >> /home/$USER/.ssh/authorized_keys
  SHELL
end
For the inline shell provisioners: it is common for a public key to contain spaces, comments, etc., so make sure to put (escaped) quotes around the variable that expands to the public key:
config.vm.provision 'shell', inline: "echo \"#{ssh_pub_key}\" >> /home/vagrant/.ssh/authorized_keys", privileged: false
A pretty complete example; hope this helps whoever visits next. Moved all the concrete values to external config files. The IP assignment is just for trying things out.
# -*- mode: ruby -*-
# vi: set ft=ruby :
require 'yaml'

vmconfig = YAML.load_file('vmconfig.yml')

=begin
Script to create VMs with public IPs, VM creation governed by the provided
config file.
All Vagrant configuration is done below. The "2" in Vagrant.configure
configures the configuration version (we support older styles for
backwards compatibility). Please don't change it unless you know what
you're doing.
Default user `vagrant` is created and the ssh key is overridden. Make sure to
have the files `vagrant_rsa` (private key) and `vagrant_rsa.pub` (public key)
in the path `./.ssh/`.
The same files need to be available for all the users you want to create in
each of these VMs.
=end

uid_start = vmconfig['uid_start']
ip_start = vmconfig['ip_start']
vagrant_private_key = Dir.pwd + '/.ssh/vagrant_rsa'
guest_sshkeys = '/' + Dir.pwd.split('/')[-1] + '/.ssh/'

Vagrant.configure('2') do |config|
  vmconfig['machines'].each do |machine|
    config.vm.define "#{machine}" do |node|
      ip_start += 1
      node.vm.box = vmconfig['vm_box_name']
      node.vm.box_version = vmconfig['vm_box_version']
      node.vm.box_check_update = false
      node.vm.boot_timeout = vmconfig['vm_boot_timeout']
      node.vm.hostname = "#{machine}"
      node.vm.network "public_network", bridge: "#{vmconfig['bridge_name']}", auto_config: false
      node.vm.provision "shell", run: "always", inline: "ifconfig #{vmconfig['ethernet_device']} #{vmconfig['public_ip_part']}#{ip_start} netmask #{vmconfig['subnet_mask']} up"
      node.ssh.insert_key = false
      node.ssh.private_key_path = ['~/.vagrant.d/insecure_private_key', "#{vagrant_private_key}"]
      node.vm.provision "file", source: "#{vagrant_private_key}.pub", destination: "~/.ssh/authorized_keys"
      node.vm.provision "shell", inline: <<-EOC
        sudo sed -i -e "\\#PasswordAuthentication yes# s#PasswordAuthentication yes#PasswordAuthentication no#g" /etc/ssh/sshd_config
        sudo systemctl restart sshd.service
      EOC
      vmconfig['users'].each do |user|
        uid_start += 1
        node.vm.provision "shell", run: "once", privileged: true, inline: <<-CREATEUSER
          sudo useradd -m -s /bin/bash -U #{user} -u #{uid_start}
          sudo mkdir /home/#{user}/.ssh
          sudo cp #{guest_sshkeys}#{user}_rsa.pub /home/#{user}/.ssh/authorized_keys
          sudo chown -R #{user}:#{user} /home/#{user}
          sudo su
          echo "%#{user} ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/#{user}
          exit
        CREATEUSER
      end
    end
  end
end
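The vmconfig.yml itself is not shown; a minimal sketch, inferred only from the keys the Vagrantfile above reads (all values are invented placeholders):
uid_start: 2000
ip_start: 100
vm_box_name: "debian/bullseye64"
vm_box_version: "11.20211230.1"
vm_boot_timeout: 600
bridge_name: "eth0"
ethernet_device: "eth1"
public_ip_part: "192.168.1."
subnet_mask: "255.255.255.0"
machines:
  - vm1
  - vm2
users:
  - alice
  - bob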
It's rather an old question, but maybe this will help someone nowadays.
What works like a charm for me is:
Vagrant.configure("2") do |config|
config.vm.box = "debian/bullseye64"
config.vm.define "debian-1"
config.vm.hostname = "debian-1"
# config.vm.network "private_network", ip: "192.168.56.2" # this enables Internal network mode for VirtualBox
config.vm.network "private_network", type: "dhcp" # this enables Host-only network mode for VirtualBox
config.vm.network "forwarded_port", guest: 8081, host: 8081 # with this you can hit http://mypc:8081 to load the web service configured in the vm..
config.ssh.host = "mypc" # use the base host's hostname.
config.ssh.insert_key = true # do not use the global public image key.
config.ssh.forward_agent = true # have already the agent keys preconfigured for ease.
config.vm.provision "ansible" do |ansible|
ansible.playbook = "../../../ansible/playbooks/configurations.yaml"
ansible.inventory_path = "../../../ansible/inventory/hosts.ini"
ansible.extra_vars = {
nodes: "#{config.vm.hostname}",
username: "vagrant"
}
ansible.ask_vault_pass = true
end
end
Then my Ansible provisioner playbook/role configurations.yaml contains this:
- name: Create .ssh folder if not exists
  file:
    state: directory
    path: "{{ ansible_env.HOME }}/.ssh"

- name: Add authorised key (for remote connection)
  authorized_key:
    state: present
    user: "{{ username }}"
    key: "{{ lookup('file', 'eos_id_rsa.pub') }}"

- name: Add public SSH key in ~/.ssh
  copy:
    src: eos_id_rsa.pub
    dest: "{{ ansible_env.HOME }}/.ssh"
    owner: "{{ username }}"
    group: "{{ username }}"

- name: Add private SSH key in ~/.ssh
  copy:
    src: eos_id_rsa
    dest: "{{ ansible_env.HOME }}/.ssh"
    owner: "{{ username }}"
    group: "{{ username }}"
    mode: 0600
Madis Maenni's answer is closest to the best solution:
Just do:
vagrant ssh-config >> ~/.ssh/config
chmod 600 ~/.ssh/config
then you can just ssh via hostname.
To get a list of the hostnames configured in ~/.ssh/config:
grep -E '^Host ' ~/.ssh/config
My example:
$ grep -E '^Host' ~/.ssh/config
Host web
Host db
$ ssh web
[vagrant#web ~]$
Generate an RSA key pair for vagrant authentication:
ssh-keygen -f ~/.ssh/vagrant
You might also want to add the vagrant identity files to your ~/.ssh/config:
IdentityFile ~/.ssh/vagrant
IdentityFile ~/.vagrant.d/insecure_private_key
For some reason we can't just specify the key we want to insert, so we take a few extra steps and generate a key ourselves. This way we get security and knowledge of exactly which key we need (plus all vagrant boxes will get the same key).
Can't ssh to vagrant VMs using the insecure private key (vagrant 1.7.2)
How do I add my own public key to Vagrant VM?
config.ssh.insert_key = false
config.ssh.private_key_path = ['~/.ssh/vagrant', '~/.vagrant.d/insecure_private_key']
config.vm.provision "file", source: "~/.ssh/vagrant.pub", destination: "/home/vagrant/.ssh/vagrant.pub"
config.vm.provision "shell", inline: <<-SHELL
  cat /home/vagrant/.ssh/vagrant.pub >> /home/vagrant/.ssh/authorized_keys
  mkdir -p /root/.ssh
  cat /home/vagrant/.ssh/authorized_keys >> /root/.ssh/authorized_keys
SHELL
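With that in place, both of these should work (my addition; <vm-ip> depends on your network setup):
vagrant ssh
ssh -i ~/.ssh/vagrant vagrant@<vm-ip>
vagrant ssh keeps working through the insecure key listed in private_key_path, while plain ssh and scp use the custom key.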