Vagrant up can't find private_key_path - ssh

When I try to run vagrant up I get the error:
There are errors in the configuration of this machine. Please fix
the following errors and try again:
SSH:
* `private_key_path` file must exist: /home/buildbot/mykey.pem
However, this file definitely exists. If I run ls -lah /home/buildbot/mykey.pem, it's there. It's owned by my user "buildbot" and has the right permissions. Everything looks good, and yet Vagrant can't see it, even though it's running as user "buildbot". Why would this be?
My Vagrantfile is a fairly generic one for AWS:
# -*- mode: ruby -*-
# vi: set ft=ruby :
require 'vagrant-aws'
Vagrant.configure(2) do |config|
  config.vm.box = 'aws-dummy'
  config.vm.provider :aws do |aws, override|
    aws.keypair_name = 'my-key-pair'
    aws.security_groups = ['my-security-group']
    aws.access_key_id = ENV['AWS_ACCESS_KEY']
    aws.secret_access_key = ENV['AWS_SECRET_KEY']
    aws.ami = 'ami-43c92455'
    override.ssh.username = 'ubuntu'
    override.ssh.private_key_path = ENV['AWS_PRIVATE_KEY_PATH']
  end
end
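A minimal debugging sketch (assuming nothing beyond the Vagrantfile above): the Vagrantfile is evaluated by Vagrant's embedded Ruby, which may see a different environment than your interactive shell, so you can print what that process actually resolves for the key path:
# Temporary debug lines near the top of the Vagrantfile: show what the
# `vagrant up` process sees for the key path and whether it can read it.
key_path = ENV['AWS_PRIVATE_KEY_PATH'].to_s
puts "private key path: #{key_path.inspect}"
puts "exists: #{File.exist?(key_path)}, readable: #{File.readable?(key_path)}"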


Running a command on a vagrant box via ssh keeps asking for a password

I am trying to run a command on/in a vagrant box using ssh.
According to the documentation, vagrant ssh -c <command> should connect to the machine via ssh and run the command.
I tried this using a simple Ubuntu Server 16.04 box, but every time I get prompted for a password. Simply running vagrant ssh allows me to connect without providing a password.
I used the following Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
config.vm.box = "osslack/ubuntu-server-16.04-no-unattended-upgrades"
config.vm.box_version = "1.0"
end
I tried to test it with the following command: vagrant ssh -c "ls".
How can I run a command via ssh without being prompted for a password?
So, after playing around with it some more, I found a workaround/solution.
When using vagrant ssh, anything after -- will be directly passed to ssh.
So running vagrant ssh -- ls will tell ssh to run the command ls.
This does not prompt for a password.
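For example (a usage sketch; everything after -- is handed to ssh unchanged, so a quoted command string works the same way it does with plain ssh):
# run a single command on the guest without a password prompt
vagrant ssh -- ls
# quoting works as with plain ssh when the command has arguments
vagrant ssh -- 'ls -la /vagrant'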

Ansible provisioning ERROR! Using a SSH password instead of a key is not possible

I would like to provision my three nodes from the last one using Ansible.
My host machine is Windows 10.
My Vagrantfile looks like:
Vagrant.configure("2") do |config|
(1..3).each do |index|
config.vm.define "node#{index}" do |node|
node.vm.box = "ubuntu"
node.vm.box = "../boxes/ubuntu_base.box"
node.vm.network :private_network, ip: "192.168.10.#{10 + index}"
if index == 3
node.vm.provision :setup, type: :ansible_local do |ansible|
ansible.playbook = "playbook.yml"
ansible.provisioning_path = "/vagrant/ansible"
ansible.inventory_path = "/vagrant/ansible/hosts"
ansible.limit = :all
ansible.install_mode = :pip
ansible.version = "2.0"
end
end
end
end
end
My playbook looks like:
---
# my little playbook
- name: My little playbook
  hosts: webservers
  gather_facts: false
  roles:
    - create_user
My hosts file looks like:
[webservers]
192.168.10.11
192.168.10.12
[dbservers]
192.168.10.11
192.168.10.13
[all:vars]
ansible_connection=ssh
ansible_ssh_user=vagrant
ansible_ssh_pass=vagrant
After executing vagrant up --provision I got the following error:
Bringing machine 'node1' up with 'virtualbox' provider...
Bringing machine 'node2' up with 'virtualbox' provider...
Bringing machine 'node3' up with 'virtualbox' provider...
==> node3: Running provisioner: setup (ansible_local)...
node3: Running ansible-playbook...
PLAY [My little playbook] ******************************************************
TASK [create_user : Create group] **********************************************
fatal: [192.168.10.11]: FAILED! => {"failed": true, "msg": "ERROR! Using a SSH password instead of a key is not possible because Host Key checking is enabled and sshpass does not support this. Please add this host's fingerprint to your known_hosts file to manage this host."}
fatal: [192.168.10.12]: FAILED! => {"failed": true, "msg": "ERROR! Using a SSH password instead of a key is not possible because Host Key checking is enabled and sshpass does not support this. Please add this host's fingerprint to your known_hosts file to manage this host."}
PLAY RECAP *********************************************************************
192.168.10.11 : ok=0 changed=0 unreachable=0 failed=1
192.168.10.12 : ok=0 changed=0 unreachable=0 failed=1
Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.
I extended my Vagrantfile with ansible.limit = :all and added [all:vars] to the hosts file, but still cannot get past the error.
Has anyone encountered the same issue?
Create a file ansible/ansible.cfg in your project directory (i.e. ansible.cfg in the provisioning_path on the target) with the following contents:
[defaults]
host_key_checking = false
This is provided that your Vagrant box already has sshpass installed. Whether it does is unclear: the error message in your question suggests it was installed (otherwise it would be "ERROR! to use the 'ssh' connection type with passwords, you must install the sshpass program"), but in your answer you install it explicitly (sudo apt-get install sshpass), as if it were not.
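With the provisioning_path and inventory_path above, the expected layout on the project side would look roughly like this (a sketch, assuming the project root is the default /vagrant synced folder):
.
│   Vagrantfile
│
└───ansible
        ansible.cfg      <- the [defaults] section above
        hosts
        playbook.yml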
I'm using Ansible version 2.6.2 and the solution with host_key_checking = false doesn't work.
Adding the environment variable export ANSIBLE_HOST_KEY_CHECKING=False skips the fingerprint check.
This error can also be solved by simply exporting the ANSIBLE_HOST_KEY_CHECKING variable:
export ANSIBLE_HOST_KEY_CHECKING=False
source: https://github.com/ansible/ansible/issues/9442
This SO post gave the answer.
I just extended the known_hosts file on the machine that is responsible for the provisioning like this:
Snippet from my modified Vagrantfile:
...
if index == 3
  node.vm.provision :pre, type: :shell, path: "install.sh"
  node.vm.provision :setup, type: :ansible_local do |ansible|
    ...
My install.sh looks like:
#!/bin/sh
# add web/database hosts to known_hosts (IP is defined in Vagrantfile)
ssh-keyscan -H 192.168.10.11 >> /home/vagrant/.ssh/known_hosts
ssh-keyscan -H 192.168.10.12 >> /home/vagrant/.ssh/known_hosts
ssh-keyscan -H 192.168.10.13 >> /home/vagrant/.ssh/known_hosts
chown vagrant:vagrant /home/vagrant/.ssh/known_hosts
# reload ssh in order to load the known hosts
/etc/init.d/ssh reload
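To confirm the entries were actually added, ssh-keygen can look a host up in that file (an optional check, not part of the original script):
# Prints the matching (hashed) known_hosts entry if one exists for this host.
ssh-keygen -F 192.168.10.11 -f /home/vagrant/.ssh/known_hosts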
I had a similar challenge when working with Ansible 2.9.6 on Ubuntu 20.04.
When I run the command:
ansible all -m ping -i inventory.txt
I get the error:
target | FAILED! => {
"msg": "Using a SSH password instead of a key is not possible because Host Key checking is enabled and sshpass does not support this. Please add this host's fingerprint to your known_hosts file to manage this host."
}
Here's how I fixed it:
When you install Ansible, it creates a file called ansible.cfg, which can be found in the /etc/ansible directory. Simply open the file:
sudo nano /etc/ansible/ansible.cfg
Uncomment this line to disable SSH key host checking
host_key_checking = False
Save the file and you should be fine now.
Note: You could also add the host's fingerprint to your known_hosts file by SSHing into the server from your machine; this prompts you to save the host's fingerprint to your known_hosts file:
promisepreston@ubuntu:~$ ssh myusername@192.168.43.240
The authenticity of host '192.168.43.240 (192.168.43.240)' can't be established.
ECDSA key fingerprint is SHA256:9Zib8lwSOHjA9khFkeEPk9MjOE67YN7qPC4mm/nuZNU.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.43.240' (ECDSA) to the list of known hosts.
myusername@192.168.43.240's password:
Welcome to Ubuntu 20.04.1 LTS (GNU/Linux 5.4.0-53-generic x86_64)
That's all.
I hope this helps.
Running the command below resolved my issue:
export ANSIBLE_HOST_KEY_CHECKING=False && ansible-playbook -i
All of the provided solutions require changing a global config file or adding an environment variable, which creates problems when onboarding new people.
Instead, you can add the following variable to your inventory or host vars:
ansible_ssh_common_args: '-o StrictHostKeyChecking=no'
Add ansible_ssh_common_args='-o StrictHostKeyChecking=no' to your inventory, either for all hosts:
[all:vars]
ansible_ssh_common_args='-o StrictHostKeyChecking=no'
[all:children]
servers
[servers]
host1
OR:
[servers]
host1 ansible_ssh_common_args='-o StrictHostKeyChecking=no'

How to include and reference a custom ssh key in a vagrant base (baseline) box? (virtualbox)

In the Vagrant documentation I did not find a hint on how to reference an included file from an included Vagrantfile within the same baseline box when using "vagrant package". Can anyone help?
Details:
When creating a new baseline box from scratch for Vagrant, you are free to use the standard Vagrant insecure SSH key or to create a new custom key. I did the latter. This new baseline box works fine with my custom key when I use my Vagrantfile with this additional line:
config.ssh.private_key_path = "custom_key_file"
Now I decided to distribute my baseline box to my team members. That's no problem. Just enter:
vagrant package --output custom.box
All other team members copy the "custom_key_file" to the project root dir and create a "Vagrantfile" with this content (done using a version control system):
# -*- mode: ruby -*-
# vi: set ft=ruby :
# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "custombox"
  config.ssh.private_key_path = "custom_key_file"
end
When done, each team member enters the following to get a virtual machine based on custom.box fast and easily:
vagrant box add custombox custom.box
vagrant up
Works fine.
Now I want to tune my baseline box a little before distributing it. I want to include the "custom_key_file" and a "Vagrantfile.pkg" that reads as follows:
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "custombox"
  config.ssh.private_key_path = "custom_key_file"
end
To create the tuned baseline box I enter:
vagrant package --output custom2v.box --vagrantfile Vagrantfile.pkg --include custom_key_file
When I extract custom2v.box I can see this tree:
C:.
│ box-disk1.vmdk
│ box.ovf
│ Vagrantfile
│
└───include
custom_key_file
_Vagrantfile
And "include/_Vagrantfile" has the content of my Vagrantfile.pkg. I can add that box as follows:
vagrant box add custombox2v custom2v.box
In a new project it is now very easy to enable it for Vagrant. Just add a "Vagrantfile" that reads as follows:
# -*- mode: ruby -*-
# vi: set ft=ruby :
# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "custombox2v"
end
But when I do a:
vagrant up
I get the following error message:
[...]
Bringing machine 'default' up with 'virtualbox' provider...
There are errors in the configuration of this machine. Please fix
the following errors and try again:
SSH:
* `private_key_path` file must exist: custom_key_file
Can anyone help?
The reason is Vagrant's load order and merging of its configs.
What you want to happen is Vagrant to use the private key located inside the box archive.
What really happens when you run "up" is that Vagrant merges your config with a few other configs on its "load & merge" path.
So at the end you have one big config with the setting:
config.ssh.private_key_path = "custom_key_file"
So Vagrant will look for custom_key_file in the same folder as your Vagrantfile, and that's why you get your error.
Check this answer and this issue for information on how to configure Vagrant to look for the key relative to the box's Vagrantfile.
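The usual pattern from those discussions (a sketch, assuming the included key ends up next to _Vagrantfile inside the box's include/ directory, as the tree above suggests) is to resolve the key path relative to the packaged Vagrantfile itself in Vagrantfile.pkg:
# Vagrantfile.pkg - resolve the key relative to this file, not to the
# user's project directory, so the merged config still finds it after
# `vagrant box add`.
Vagrant.configure("2") do |config|
  config.vm.box = "custombox"
  config.ssh.private_key_path = File.expand_path("custom_key_file", File.dirname(__FILE__))
end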

Capistrano 3 runs every command twice (new install) - Configuration issue

I just completed my Capistrano installation for the first time. Most settings are left at their defaults; I configured my server, its authentication, the remote folder, and access to my git repository.
I use capistrano to deploy php code to my server.
cap staging deploy and cap production deploy work, but they run every command twice. This sometimes causes problems when those tasks are executed too quickly on the server, returning error codes, which stops the deployment process.
An example of my output when running cap staging deploy:
DEBUG[47ecea59] Running /usr/bin/env if test ! -d ~/www/test_server/repo; then echo "Directory does not exist '~/www/test_server/repo'" 1>&2; false; fi on ftp.cluster013.ovh.net
DEBUG[47ecea59] Command: if test ! -d ~/www/test_server/repo; then echo "Directory does not exist '~/www/test_server/repo'" 1>&2; false; fi
DEBUG[c450e730] Running /usr/bin/env if test ! -d ~/www/test_server/repo; then echo "Directory does not exist '~/www/test_server/repo'" 1>&2; false; fi on ftp.cluster013.ovh.net
DEBUG[c450e730] Command: if test ! -d ~/www/test_server/repo; then echo "Directory does not exist '~/www/test_server/repo'" 1>&2; false; fi
It does the same with every single task, except the one I defined myself (in my deploy.rb, I defined a :set_distant_server task that moves around files with server info).
I am pretty sure I missed something during the initial configuration.
Here is my Capfile, still at default settings:
# Load DSL and Setup Up Stages
require 'capistrano/setup'
# Includes default deployment tasks
require 'capistrano/deploy'
# Includes tasks from other gems included in your Gemfile
# require 'capistrano/rvm'
# require 'capistrano/rbenv'
# require 'capistrano/chruby'
#require 'capistrano/bundler'
#require 'capistrano/rails/assets'
#require 'capistrano/rails/migrations'
# Loads custom tasks from `lib/capistrano/tasks' if you have any defined.
Dir.glob('lib/capistrano/tasks/*.rake').each { |r| import r }
Followed by my deploy.rb file:
# config valid only for Capistrano 3.1
lock '3.2.1'
set :scm, :git
set :application, 'Application name'
# I use token authentification
set :repo_url, 'https://XXXXXXXXXXX:#XXXXXXX.git'
set :role, 'web'
# Default value for :log_level is :debug
set :log_level, :debug
set :tmp_dir, 'www/test_server/tmp'
set :keep_releases, 8
role :deploy_server, "XXXuser_name@XXXX_server"
task :set_distant do
  on roles(:deploy_server) do
    execute 'echo ------------******* STAGING *******------------'
    execute 'cp ~/www/test_server/current/access_distant.php ~/www/test_server/current/access.php'
    execute 'cp ~/www/test_server/current/session_distant.php ~/www/test_server/current/session.php'
  end
end
after "deploy:finished", :set_distant
Here is my staging.rb, much shorter:
server 'XXX_server', user: 'XXXuser_name', roles: %w{web}, port: 22, password: 'XXXpassword'
set :deploy_to, '~/www/test_server'
set :branch, 'staging'
And my production.rb, very similar:
server 'XXX_server', user: 'XXXuser_name', roles: %w{web}, port: 22, password: 'XXXpassword'
set :deploy_to, '~/www/beta/'
I'm pretty sure I missed a step in all the prerequisites to make it run nicely. I am new to Ruby and gems, and haven't used a shell in a very long time.
Does anyone see why those commands are run twice, and how I could fix it?
In advance, many many thanks.
Additional info:
Ruby version: ruby -v
ruby 2.1.2p95 (2014-05-08 revision 45877) [x86_64-darwin13.0]
Capistrano version: cap -V
Capistrano Version: 3.2.1 (Rake Version: 10.1.0)
I did not create or set up a Gemfile; I understood it was not needed in Capistrano 3. In any case, I would not know how to do it.
I was having this same issue and realized I didn't need both
role :web
and
server '<server>'
I got rid of role :web and that got rid of the second execution.
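Mapped onto the configuration above, that would mean dropping the extra role line from deploy.rb and keeping only the server declaration in staging.rb / production.rb (a sketch, not the poster's exact fix; the custom task then targets the role declared by server):
# deploy.rb - remove: role :deploy_server, "XXXuser_name@XXXX_server"
# and point the custom task at the :web role from the server declaration:
task :set_distant do
  on roles(:web) do
    execute 'echo ------------******* STAGING *******------------'
    execute 'cp ~/www/test_server/current/access_distant.php ~/www/test_server/current/access.php'
    execute 'cp ~/www/test_server/current/session_distant.php ~/www/test_server/current/session.php'
  end
end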

Vagrant was unable to communicate with the guest machine

I'm on Ubuntu 12.04 and my Vagrantfile looks like this:
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
config.vm.box = "base"
config.vm.box_url = "http://cloud-images.ubuntu.com/vagrant/precise/current/precise-server-cloudimg-amd64-vagrant-disk1.box"
config.vm.network :forwarded_port, guest: 80, host: 8080
config.vm.provision :puppet do |puppet|
puppet.manifests_path = "puppet/manifests"
puppet.module_path = "puppet/modules"
puppet.manifest_file = "init.pp"
puppet.options="--verbose --debug"
end
end
This was supposed to run fine; the same configuration works OK on my MacBook.
I'm using Vagrant 1.3.5 and VirtualBox 4.1.12 (but before that I tried with 4.2.18).
I don't know how to fix this; I've been stuck for days now. Any help would be great.
Make sure you have a proper version of the Guest Additions, or just use the vagrant-vbguest plugin, which will check and install them for you. In fact, it is a must-have plugin if you are using VirtualBox.
Try to increase config.ssh.timeout (default: 5 min.)
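For example (a sketch following that suggestion; the option name and default may differ between Vagrant versions):
Vagrant.configure("2") do |config|
  config.vm.box = "base"
  # Give the guest more time to become reachable over SSH before Vagrant
  # reports that it was unable to communicate (hypothetical 10-minute value).
  config.ssh.timeout = 600
end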
This is not really an answer, rather possible ways of solving it. There is an open issue on Vagrant.
This is a Vagrant bug that should be fixed in the next version.
For now, just make sure that the ~/.vagrant.d/insecure_private_key file is owned by the same user that starts Vagrant and has permissions 600; this should help.
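For example (a sketch of that ownership and permission fix, run as the user who invokes vagrant; prepend sudo if the file is currently owned by another user):
# Ensure the key is owned by the current user and not readable by others.
chown "$(whoami)" ~/.vagrant.d/insecure_private_key
chmod 600 ~/.vagrant.d/insecure_private_key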