How do I SSH to a Molecule instance without molecule login? - ssh

I'm using Molecule and Vagrant to deploy a CentOS 7 instance. For various reasons, I need to access the Molecule instance with the plain ssh command instead of molecule login. The SSH connection details will then be pasted into one of my VS Code extensions.
molecule.yml
---
dependency:
  name: gilt
driver:
  name: vagrant
  provider:
    name: virtualbox
lint:
  name: yamllint
platforms:
  - name: openresty-instance
    box: centos/7
    instance_raw_config_args:
      - "ssh.insert_key = false"
      - "vm.network 'forwarded_port', guest: 22, host: 22"
      - "vm.network 'forwarded_port', guest: 80, host: 8080"
    interfaces:
      - auto_config: true
        network_name: private_network
        ip: '192.168.33.111'
provisioner:
  name: ansible
  log: true
  lint:
    name: ansible-lint
verifier:
  name: testinfra
  lint:
    name: flake8
The IP above lets me reach port 80 from outside Vagrant, but running ssh against the Molecule instance's IP does not work.
Error
###########################################################
#    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     #
###########################################################
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is
SHA256:wVk4Da5pWWNHLiypvEKAJuwzG/2FLOMgwPkrO4oFBZQ.
Please contact your system administrator.
Add correct host key in /Users/abel/.ssh/known_hosts to get rid of this message.
Offending ECDSA key in /Users/abel/.ssh/known_hosts:32
ECDSA host key for 192.168.33.111 has changed and you have requested strict checking.
Host key verification failed.

This message can mean exactly what it says, i.e. "there is something nasty going on", if you see it in an environment with static servers.
But if you have, say, a testing environment where you create and destroy virtual machines as a daily procedure, this is a "normal" security warning.
It just means: "hey, I know this guy, but his fingerprint doesn't match the one in my document archive". If this is intended (as in a test environment), just go into the "document archive", delete "this guy's fingerprint" and "take a new fingerprint of him".
So in your case ("/Users/abel/.ssh/known_hosts:32") just open your known_hosts file and delete line 32.
Or use the command:
ssh-keygen -R 192.168.33.111 -f "/Users/abel/.ssh/known_hosts"
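Once the stale entry is gone you can connect directly with ssh. A minimal sketch, assuming the box keeps the default vagrant user and, because of ssh.insert_key = false, Vagrant's shared insecure key in its usual location:
# connect to the private-network IP with the vagrant insecure key (assumed paths)
ssh -i ~/.vagrant.d/insecure_private_key vagrant@192.168.33.111
If those assumptions don't hold, vagrant ssh-config run against the Vagrantfile that Molecule generated for the scenario prints the exact User, Port, and IdentityFile, and those are the same values you would paste into the VS Code extension.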

Related

testing ssh connection error "ssh: Could not resolve hostname hostname: Temporary failure in name resolution"

I am following this page on how to test an SSH connection.
When I enter the first line:
$ ssh -T git@hostname
I get an error:
ssh: Could not resolve hostname hostname: Temporary failure in name resolution
Make sure you can ping hostname, meaning your DNS does resolve hostname into an IP address.
If not, then SSH falls back to ~/.ssh/config, looking for a Host hostname entry which would indicate what 'hostname' actually means.
Of course, replace 'hostname' with the actual remote host name you want to reach with this SSH session.
After that, it depends on your OS (Windows, Linux, ...), both for the source and the target.
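For the second case, a minimal ~/.ssh/config entry might look like the following; the host alias, IP address, user, and key path are placeholders:
Host hostname
  HostName 203.0.113.10
  User git
  IdentityFile ~/.ssh/id_rsa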

Ansible Inventory Specifying the Same Host with Different Users and Keys for Initial SSH User Setup and Disabling Root Access

I am attempting to have playbooks that run once to set up a new user and disable root ssh access.
For now, I am doing that by declaring all of my inventory twice. Each host needs an entry that connects as the root user, which is used to create a new user, set up SSH settings, and then disable root access.
Then each host needs another entry with the new user that gets created.
My current inventory looks like this. It's only one host for now, but with a larger inventory, the repetition would just take up a ton of unnecessary space:
---
# ./hosts.yaml
---
all:
  children:
    master_roots:
      hosts:
        demo_master_root:
          ansible_host: a.b.c.d # same ip as below
          ansible_user: root
          ansible_ssh_private_key_file: ~/.ssh/id_rsa_infra_ops
    masters:
      hosts:
        demo_master:
          ansible_host: a.b.c.d # same ip as above
          ansible_user: infraops
          ansible_ssh_private_key_file: ~/.ssh/id_rsa_infra_ops
Is there a cleaner way to do this?
Is this an anti-pattern in any way? It is not idempotent. It would be nice to have this run in a way that running the same playbook twice always has the same output - either "success", or "no change".
I am using DigitalOcean and they have a functionality to have this done via a bash script before the VM comes up for the first time, but I would prefer a platform-independent solution.
Here is the playbook for setting up the users & ssh settings and disabling root access
---
# ./initial-host-setup.yaml
---
# References
# Digital Ocean recommended droplet setup script:
# - https://docs.digitalocean.com/droplets/tutorials/recommended-setup
# Digital Ocean tutorial on installing kubernetes with Ansible:
# - https://www.digitalocean.com/community/tutorials/how-to-create-a-kubernetes-cluster-using-kubeadm-on-debian-9
# Ansible Galaxy (Community) recipe for securing ssh:
# - https://github.com/vitalk/ansible-secure-ssh
---
- hosts: master_roots
  become: 'yes'
  tasks:
    - name: create the 'infraops' user
      user:
        state: present
        name: infraops
        password_lock: 'yes'
        groups: sudo
        append: 'yes'
        createhome: 'yes'
        shell: /bin/bash
    - name: add authorized keys for the infraops user
      authorized_key: 'user=infraops key="{{item}}"'
      with_file:
        '{{ hostvars[inventory_hostname].ansible_ssh_private_key_file }}.pub'
    - name: allow infraops user to have passwordless sudo
      lineinfile:
        dest: /etc/sudoers
        line: 'infraops ALL=(ALL) NOPASSWD: ALL'
        validate: visudo -cf %s
    - name: disable empty password login for all users
      lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: '^#?PermitEmptyPasswords'
        line: PermitEmptyPasswords no
      notify: restart sshd
    - name: disable password login for all users
      lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: '^(#\s*)?PasswordAuthentication '
        line: PasswordAuthentication no
      notify: restart sshd
    - name: Disable remote root user login
      lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: '^#?PermitRootLogin'
        line: 'PermitRootLogin no'
      notify: restart sshd
  handlers:
    - name: restart sshd
      service:
        name: sshd
        state: restarted
Everything after this would use the masters inventory.
EDIT
After some research I have found that "init scripts"/"startup scripts"/"user data" scripts are supported across AWS, GCP, and DigitalOcean, potentially via cloud-init (this is what DigitalOcean uses; I didn't research the others), which is cross-provider enough for me to just stick with a bash init-script solution.
I would still be interested & curious if someone had a killer Ansible-only solution for this, although I am not sure there is a great way to make this happen without a pre-init script.
Regardless of any Ansible limitations, it seems that without using a cloud-init script you can't have this. Either the server starts with root or a similar user to perform these actions, or it starts without a user with those powers, in which case you can't perform these actions at all.
Further, I have seen Ansible playbooks and bash scripts that try to solve the desired "idempotence" (complete with no errors even if root is already disabled) by testing root ssh access, then falling back to another user, but "I can't ssh with root" is a poor test for "is the root user disabled" because there are plenty of ways your ssh access could fail even though the server is still configured to allow root to ssh.
EDIT 2: placing this here, since I can't use newlines in my response to a comment.
β.εηοιτ.βε responded to my assertion:
"but 'I can't ssh with root' is a poor test for 'is the root user disabled' because there are plenty of ways your ssh access could fail even though the server is still configured to allow root to ssh"
with:
"then, try to ssh with infraops and assert that PermitRootLogin no is in the ssh daemon config file?"
It sounds like the suggestion is:
- attempt ssh with root
- if success, we know user/ssh setup tasks have not completed, so run those tasks
- if failure, attempt ssh with infraops
- if success, go ahead and run everything except the user creation again to ensure ssh config is as desired
- if failure... ? something else is probably wrong, since I can't ssh with either user
I am not sure what this sort of if-then failure recovery actually looks like in an Ansible script
You can overwrite host variables for a given play by using vars.
- hosts: masters
  become: 'yes'
  vars:
    ansible_ssh_user: "root"
    ansible_ssh_private_key_file: "~/.ssh/id_rsa_infra_ops"
  tasks:
You could define only the demo_master host and alter ansible_user and ansible_ssh_private_key_file at run time, using the command-line flags --user and --private-key.
So, with a hosts.yaml containing
all:
  children:
    masters:
      hosts:
        demo_master:
          ansible_host: a.b.c.d # same ip as above
          ansible_user: infraops
          ansible_ssh_private_key_file: ~/.ssh/id_rsa_infra_ops
and run against - hosts: masters, the first run would, for example, be
ansible-playbook initial-host-setup.yaml \
  --user root \
  --private-key ~/.ssh/id_rsa_root
while the subsequent runs would simply be
ansible-playbook subsequent-host-setup.yaml
Since all the required values are in the inventory already.
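As for the if-then failure recovery asked about in EDIT 2, here is a rough, untested sketch of how it could look in Ansible. The task file name initial-setup-tasks.yaml and the task names are made up for illustration; the idea is to probe the root connection and only run the hardening tasks when root still works:
- hosts: masters
  gather_facts: false
  tasks:
    # Probe: can we still connect as root? Ignore both a task failure and
    # an unreachable host so the play carries on either way.
    - name: check whether root ssh still works
      ping:
      vars:
        ansible_user: root
      ignore_errors: true
      ignore_unreachable: true
      register: root_probe

    # Only run the initial user/ssh hardening when root is still usable.
    - name: run initial setup tasks as root
      include_tasks: initial-setup-tasks.yaml
      vars:
        ansible_user: root
      when: not (root_probe.unreachable | default(false)) and root_probe is not failed
This still only proves "root could not connect", not "root login is disabled", so the sshd_config assertion suggested in the comment remains the stronger check.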

using Ansible on a gcp instance to connect to another instance error

I have a server called master-instance-node and a server called slave-instance-node-1. On master-instance-node I have Ansible installed; I modified the /etc/ansible/hosts file and added the following:
[webservers]
slave-instance-node-1
Then I try the following command:
ansible webservers -a "w" -u USERNAME
but I get the following error:
slave-instance-node-1 | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: ###########################################################\r\n# WARNING: POSSIBLE DNS SPOOFING DETECTED! #\r\n###########################################################\r\nThe ECDSA host key for slave-instance-node-1 has changed,\r\nand the key for the corresponding IP address XX.XXX.X.XX\r\nis unknown. This could either mean that\r\nDNS SPOOFING is happening or the IP address for the host\r\nand its host key have changed at the same time.\r\n###########################################################\r\n# WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! #\r\n###########################################################\r\nIT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!\r\nSomeone could be eavesdropping on you right now (man-in-the-middle attack)!\r\nIt is also possible that a host key has just been changed.\r\nThe fingerprint for the ECDSA key sent by the remote host is\nSHA256:XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX.\r\nPlease contact your system administrator.\r\nAdd correct host key in /home/USERNAME/.ssh/known_hosts to get rid of this message.\r\nOffending ECDSA key in /home/USERNAME/.ssh/known_hosts:1\r\n remove with:\r\n ssh-keygen -f \"/home/USERNAME/.ssh/known_hosts\" -R \"slave-instance-node-1\"\r\nECDSA host key for slave-instance-node-1 has changed and you have requested strict checking.\r\nHost key verification failed.",
"unreachable": true
}
I thought the known_hosts file was updated automatically in GCP. What does this error mean and how do I fix it?
In addition to the other commentators: check which IP your instances are using. If your DNS is configured for the external IPs, you may prefer a static external IP to avoid this error after an instance reboot. External addresses are ephemeral, so the issue can occur not only after redeployment but also after a reboot. You may be interested in this doc: https://cloud.google.com/compute/docs/ip-addresses#externaladdresses
Thanks to the comments on my question I was able to figure out the answer. First I had to remove the stale known-host entry with ssh-keygen -f "/home/USERNAME/.ssh/known_hosts" -R "slave-instance-node-1", and I also had to set export ANSIBLE_HOST_KEY_CHECKING=false.
Then I had to add ansible_user=USERNAME next to the server name/IP in the /etc/ansible/hosts file. And finally I had to add private_key_file = /path/to/file to the /etc/ansible/ansible.cfg file.
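Put together, the relevant pieces would look roughly like this (the username and key path are placeholders, and host_key_checking = False is the ansible.cfg equivalent of the ANSIBLE_HOST_KEY_CHECKING environment variable):
# /etc/ansible/hosts
[webservers]
slave-instance-node-1 ansible_user=USERNAME

# /etc/ansible/ansible.cfg
[defaults]
host_key_checking = False
private_key_file = /path/to/file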

Is it possible to add an ssh key to the agent for a private repo in an ansible playbook?

I am using Ansible to provision a Vagrant environment. As part of the provisioning process, I need to connect from the currently-provisioning VM to a private external repository using an ssh key in order to use composer to pull in modules for an application. I've done a lot of reading on this before asking this question, but still can't seem to comprehend what's going on.
What I want to happen is:
As part of the playbook, on the Vagrant VM, I add the ssh key to the private repo to the ssh-agent
Using that private key, I am then able to use composer to require modules from the external source
I've read articles which highlight specifying the key when executing the playbook (e.g. ansible-playbook -u username --private-key play.yml). As far as I understand, this isn't for me, as I'm calling the playbook via a Vagrantfile. I've also read articles which mention SSH agent forwarding (SSH Agent Forwarding with Ansible). Based on what I have read, this is what I've done:
On the VM being provisioned, I insert a known_hosts file which consists of the host entries of the machines which house the repos I need:
On the VM being provisioned, I have the following in ~/.ssh/config:
Host <VM IP>
  ForwardAgent yes
I have the following entries in my ansible.cfg to support ssh forwarding:
[defaults]
transport = ssh
[ssh_connection]
ssh_args=-o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=60s -o ControlPath=/tmp/ansible-ssh-%h-%p-%r
[privilege_escalation]
pipelining = False
I have also added the following task to the playbook which tries to use composer:
- name: Add ssh agent line to sudoers
  become: true
  lineinfile:
    dest: /etc/sudoers
    state: present
    regexp: SSH_AUTH_SOCK
    line: Defaults env_keep += "SSH_AUTH_SOCK"
I exit the ansible provisioner and add the private key on the provisioned VM to the agent via a shell provisioner (This is where I suspect I'm going wrong)
Then, I attempt to use composer, or call git via the command module. Like this, for example, to test:
- name: Test connection
  command: ssh -T git@github.com
Finally, just in case I wasn't understanding ssh connection forwarding correctly, I assumed that what was supposed to happen was that I needed to first add the key to my local machine's agent, then forward that through to the provisioned VM to use to grab the repositories via composer. So I used ssh-add on my local machine before executing vagrant up and running the provisioner.
No matter what, though, I always get permission denied when I do this. I'd greatly appreciate some understanding as to what I may be missing in my understanding of how ssh forwarding should be working here, as well as any guidance for making this connection happen.
I'm not certain I understand your question correctly, but I often setup machines that connect to a private bitbucket repository in order to clone it. You don't need to (and shouldn't) use agent forwarding for that ("ssh forwarding" is unclear; there's "authentication agent forwarding" and "port forwarding", but you need neither in this case).
Just to be clear with terminology, you are running Ansible in your local machine, you are provisioning the controlled machine, and you want to ssh from the controlled machine to a third-party server.
What I do is I upload the ssh key to the controlled machine, in /root/.ssh (more generally $HOME/.ssh where $HOME is the home directory of the controlled machine user who will connect to the third-party server—in my case that's root). I don't use the names id_rsa and id_rsa.pub, because I don't want to touch the default keys of that user (these might have a different purpose; for example, I use them to backup the controlled machine). So this is the code:
- name: Install bitbucket aptiko_ro ssh key
  copy:
    dest: /root/.ssh/aptiko_ro_id_rsa
    mode: 0600
    content: "{{ aptiko_ro_ssh_key }}"
- name: Install bitbucket aptiko_ro ssh public key
  copy:
    dest: /root/.ssh/aptiko_ro_id_rsa.pub
    content: "{{ aptiko_ro_ssh_pub_key }}"
Next, you need to tell the controlled machine ssh this: "When you connect to the third-party server, use key X instead of the default key, and logon as user Y". You tell it in this way:
- name: Install ssh config that uses aptiko_ro keys on bitbucket
  copy:
    dest: /root/.ssh/config
    content: |
      Host bitbucket.org
        IdentityFile ~/.ssh/aptiko_ro_id_rsa
        User aptiko_ro
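With that ~/.ssh/config in place, a later clone from bitbucket.org picks up the right key and user automatically. A hypothetical follow-up task (the repository URL and destination are placeholders):
- name: Clone the private repository over ssh
  git:
    repo: git@bitbucket.org:myteam/myrepo.git
    dest: /srv/myrepo
    accept_hostkey: yes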

Vagrant ssh authentication failure

The problem with ssh authentication:
==> default: Clearing any previously set forwarded ports...
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
default: Adapter 1: nat
default: Adapter 2: bridged
==> default: Forwarding ports...
default: 22 => 2222 (adapter 1)
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 127.0.0.1:2222
default: SSH username: vagrant
default: SSH auth method: private key
default: Error: Connection timeout. Retrying...
default: Error: Connection timeout. Retrying...
default: Error: Connection timeout. Retrying...
default: Error: Connection timeout. Retrying...
default: Error: Authentication failure. Retrying...
default: Error: Authentication failure. Retrying...
default: Error: Authentication failure. Retrying...
default: Error: Authentication failure. Retrying...
default: Error: Authentication failure. Retrying...
I can Ctrl+C out of the authentication loop and then successfully ssh in manually.
I performed the following steps on the guest box:
Enabled Remote Login for All Users.
Created the ~/.ssh directory with 0700 permissions.
Created the ~/.ssh/authorized_keys file with 0600 permissions.
Pasted this public key into ~/.ssh/authorized_keys
I've also tried using a private (hostonly) network instead of the public (bridged) network, using this line in the Vagrantfile:
config.vm.network "private_network", ip: "172.16.177.7"
I get the same output (except Adapter 2: hostonly) but then cannot ssh in manually.
I also tried config.vm.network "private_network", ip: "10.0.0.100".
I also tried setting config.ssh.password in the Vagrantfile. This does output SSH auth method: password but still doesn't authenticate.
And I also tried rebuilding the box and rechecking all the above.
It looks like others have had success with this configuration, so there must be something I'm doing wrong.
I found this thread and enabled the GUI, but that doesn't help.
For general information: by default, to connect over ssh you may simply use
user: vagrant, password: vagrant
https://www.vagrantup.com/docs/boxes/base.html#quot-vagrant-quot-user
First, run the following to see which insecure_private_key is in your machine config:
$ vagrant ssh-config
Example:
$ vagrant ssh-config
Host default
  HostName 127.0.0.1
  User vagrant
  Port 2222
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile C:/Users/konst/.vagrant.d/insecure_private_key
  IdentitiesOnly yes
  LogLevel FATAL
http://docs.vagrantup.com/v2/cli/ssh_config.html
Second, do one of the following:
Change the contents of the insecure_private_key file to the contents of your personal private key.
Or:
Add it to the Vagrantfile:
Vagrant.configure("2") do |config|
  config.ssh.private_key_path = "~/.ssh/id_rsa"
  config.ssh.forward_agent = true
end
config.ssh.private_key_path is your local private key
Your private key must be available to the local ssh-agent. You can check with ssh-add -L. If it's not listed, add it with ssh-add ~/.ssh/id_rsa
Don't forget to add your public key to ~/.ssh/authorized_keys on the Vagrant VM. You can do it by copy-and-pasting or by using a tool like ssh-copy-id (user: root, password: vagrant, port: 2222): ssh-copy-id -p 2222 root@127.0.0.1
If it still does not work, try this:
Remove the insecure_private_key file from c:\Users\USERNAME\.vagrant.d\insecure_private_key
Run vagrant up (Vagrant will generate a new insecure_private_key file)
In other cases, it is helpful to just set forward_agent in Vagrantfile:
Vagrant::Config.run do |config|
  config.ssh.forward_agent = true
end
Useful:
Git can be set up with the installer from git-scm.com.
After installing it and creating a personal key pair, your public key will be in your profile path: c:\users\USERNAME\.ssh\id_rsa.pub
PS: Finally, I suggest you take a look at Ubuntu on Windows 10.
None of the above worked for me. Somehow the box had the wrong public key added to the vagrant user's authorized_keys file.
If you can still ssh into the box with the vagrant password (the password is vagrant), i.e.
ssh vagrant@localhost -p 2222
then copy the public key content from https://raw.githubusercontent.com/mitchellh/vagrant/master/keys/vagrant.pub into the authorized_keys file with the following command
echo "ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA6NF8iallvQVp22WDkTkyrtvp9eWW6A8YVr+kz4TjGYe7gHzIw+niNltGEFHzD8+v1I2YJ6oXevct1YeS0o9HZyN1Q9qgCgzUFtdOKLv6IedplqoPkcmF0aYet2PkEDo3MlTBckFXPITAMzF8dJSIFo9D8HfdOV0IAdx4O7PtixWKn5y2hMNG0zQPyUecp4pzC6kivAIhyfHilFR61RGL+GPXQ2MWZWFYbAGjyiYJnAmCP3NOTd0jMZEnDkbUvxhMmBYSdETk1rRgm+R4LOzFUGaHqHDLKLX+FIPKcF96hrucXzcWyLbIbEgE98OHlnVYCzRdK8jlqm8tehUc9c9WhQ== vagrant insecure public key" > .ssh/authorized_keys
When done exit the VM and try vagrant ssh again. It should work now.
If you experience this issue on vagrant 1.8.5, then check out this thread on github:
https://github.com/mitchellh/vagrant/issues/7610
It's basically caused by a permission issue; the workaround is just:
vagrant ssh
password: vagrant
chmod 0600 ~/.ssh/authorized_keys
exit
then
vagrant reload
FYI: this issue only affects CentOS, Ubuntu works fine.
Run the following commands in guest machine/VM:
wget https://raw.githubusercontent.com/mitchellh/vagrant/master/keys/vagrant.pub -O ~/.ssh/authorized_keys
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
chown -R vagrant:vagrant ~/.ssh
Then do vagrant halt. This will remove and regenerate your private keys.
(These steps assume you already have the ~/.ssh/ directory and the ~/.ssh/authorized_keys file under your home folder.)
In my experience, this has been a surprisingly frequent problem with new vagrant machines. By far the easiest way to solve it, instead of altering the configuration itself, has been creating the required ssh keys manually on the client, then using the private key on the host.
Log in to vagrant machine: vagrant ssh, use default password vagrant.
Create ssh keys: for example, ssh-keygen -t rsa -b 4096 -C "vagrant" (as advised by GitHub's relevant guide).
Rename the public key file (by default id_rsa.pub), overriding the old one: mv .ssh/id_rsa.pub .ssh/authorized_keys.
Reload ssh service in case needed: sudo service ssh reload.
Copy the private key file (by default id_rsa) to the host machine: for instance, use a fine combination of cat and clipboard, cat .ssh/id_rsa, paint and copy (better ways must exist, go invent one!).
Logout from the vagrant machine: logout.
Find the current private key used by vagrant by looking at its configuration: vagrant ssh-config (look for the line IdentityFile "/[...]/private_key").
Replace the current private key with the one you created on the host machine: for example, nano /[...]/private_key and paste from the clipboard, if all else fails. (Note, however, that if your private_key is not project-specific but shared by multiple vagrant machines, you had better configure the path yourself in order not to break other perfectly working machines! Changing the path is as simple as adding a line config.ssh.private_key_path = "path/to/private_key" to the Vagrantfile.) Furthermore, if you are using a PuPHPet-generated machine, you can store your private key in the file puphpet/files/dot/ssh/id_rsa and it will be added to the Vagrantfile's ssh config automatically.
Test the setup: vagrant ssh should now work.
Should that be the case, congratulate yourself, logout, run vagrant provision if needed and carry on with the meaningful task at hand.
If you still face problems, it may come handy to add verbose flag to ssh command to ease debugging. You can pass that (or any other option, for that matter) after double dash. For example, typing vagrant ssh -- -v. Feel free to add as many v's as you need, each will give you more information.
Unable to run vagrant up because it gets stuck and times out? I recently had a "water in laptop" incident and had to migrate to a new one (a Mac, by the way). I successfully got all my projects up and running except the one that was using vagrant.
$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Clearing any previously set forwarded ports...
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
default: Adapter 1: nat
default: Adapter 2: hostonly
==> default: Forwarding ports...
default: 8000 (guest) => 8877 (host) (adapter 1)
default: 8001 (guest) => 8878 (host) (adapter 1)
default: 8080 (guest) => 7777 (host) (adapter 1)
default: 5432 (guest) => 2345 (host) (adapter 1)
default: 5000 (guest) => 8855 (host) (adapter 1)
default: 22 (guest) => 2222 (host) (adapter 1)
==> default: Running 'pre-boot' VM customizations...
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 127.0.0.1:2222
default: SSH username: vagrant
default: SSH auth method: private key
default: Warning: Authentication failure. Retrying...
default: Warning: Authentication failure. Retrying...
default: Warning: Authentication failure. Retrying...
It couldn't authenticate, retried again and again and eventually gave up.
This is how I got it back in shape:
1 - Find the IdentityFile used by Vagrant:
$ vagrant ssh-config
Host default
  HostName 127.0.0.1
  User vagrant
  Port 2222
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /Users/ned/.vagrant.d/insecure_private_key
  IdentitiesOnly yes
  LogLevel FATAL
2 - Check the public key in the IdentityFile:
$ ssh-keygen -y -f <path-to-insecure_private_key>
It'd output something like this:
ssh-rsa AAAAB3Nyc2EAAA...9gE98OHlnVYCzRdK8jlqm8hQ==
3 - Log in to the Vagrant guest with the password vagrant:
ssh -p 2222 -o UserKnownHostsFile=/dev/null vagrant@127.0.0.1
The authenticity of host '[127.0.0.1]:2222 ([127.0.0.1]:2222)' can't be established.
RSA key fingerprint is dc:48:73:c3:18:e4:9d:34:a2:7d:4b:20:6a:e7:3d:3e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '[127.0.0.1]:2222' (RSA) to the list of known hosts.
vagrant@127.0.0.1's password: vagrant
Welcome to Ubuntu 16.04.1 LTS (GNU/Linux 4.4.0-31-generic x86_64)
...
NOTE: if the vagrant guest is configured to disallow password authentication, you need to open the VirtualBox GUI, double-click the guest name, log in as vagrant/vagrant, then sudo -s and edit /etc/ssh/sshd_config: look for the PasswordAuthentication no line (usually at the end of the file), replace no with yes, and restart sshd (i.e. systemctl reload sshd or /etc/init.d/sshd restart).
4 - Add the public key to the /home/vagrant/.ssh/authorized_keys file.
$ echo "ssh-rsa AA2EAAA...9gEdK8jlqm8hQ== vagrant" > /home/vagrant/.ssh/authorized_keys
5 - Exit (CTRL+d) and stop the Vagrant guest and then bring it back up.
IMPORTANT: if you use any provisioning tools (e.g. Ansible), disable them before restarting your guest, as Vagrant will think your guest is not provisioned because of the use of the insecure private key. It will reinstall the key and then run your provisioner!
$ vagrant halt
$ vagrant up
Hopefully you will have your arms in the air now...
I got this, with just a minor amendment, from Ned Batchelder's article - Ned, you are a champ!
This can also happen if you're trying to force your VM to use a root user by default for SSH....
For example, a config like so in your Vagrantfile may cause this failure:
config.ssh.username = 'root'
config.ssh.password = 'vagrant'
config.ssh.insert_key = 'true'
Solution: Comment out those lines and try again!
Problem: I was getting ssh authentication errors on a box I provisioned; the original was working fine.
The problem for me was a missing private key in .vagrant/machines/default/virtualbox/private_key. I copied the private key from the same relative location in the original box and voilà!
I have found a way around the mess with the keys on Win 8.2, where I did not succeed with any of the methods mentioned here. It may be interesting that exactly the same combination of VirtualBox, Vagrant, and the box runs on Win 7 Ultimate without any problems.
I switched to password authentication by adding the following settings to the Vagrantfile:
config.ssh.password = "vagrant"
config.ssh.insert_key = false
Note that I'm not sure this is the only change required, because I had already done the following:
I generated a new RSA key pair and changed authorized_keys file accordingly (all in the virtual machine, see the suggestions above and elsewhere)
I copied the private key to the same directory where Vagrantfile resides and added
config.ssh.private_key_path = "./id_rsa"
But I believe that these changes were irrelevant. I spent plenty of time trying, so I did not change the working configuration, for obvious reasons :)
For me, this was resolved by changing the permissions on the .ssh folder in the vagrant home directory (i.e. "~vagrant/.ssh"). I think I messed up the permissions when I was setting up ssh keys for my application.
It seems that the authorized_keys file must be 'rw' only for the 'vagrant' user, so "chmod 600 authorized_keys"; the same goes for the directory itself and its parent:
so:
chmod 600 authorized_keys
chmod 700 .
chmod 700 ..
It was only after I had all these permissions restored that vagrant ssh started to work again.
I think it's something to do with ssh security: it refuses to recognise keys that are in any way accessible beyond the current user, so vagrant's attempts to log in are rejected.
If you are using the default SSH setup in your Vagrantfile and started seeing SSH authentication errors after re-associating your VM box due to a crash, try replacing the public key in your vagrant machine.
Vagrant replaces the public key associated with the insecure private key pair on each logout, for security reasons. If you didn't properly shut down your machine, the public/private key pair can go out of sync, causing the SSH authentication error.
To resolve this issue, simply load the current insecure private key and then copy the matching public key into your VM's authorized_keys file.
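A minimal sketch of that re-sync, assuming the default location of the insecure key, the usual forwarded port 2222, and that password login (vagrant/vagrant) still works:
# derive the public half of the insecure key, then append it inside the guest
ssh-keygen -y -f ~/.vagrant.d/insecure_private_key > vagrant_insecure.pub
ssh -p 2222 vagrant@127.0.0.1 'cat >> ~/.ssh/authorized_keys' < vagrant_insecure.pub
rm vagrant_insecure.pub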
This might be the last answer in the list, but it worked for me and I did not find it anywhere else; I figured it out myself after two days of research, so you had better try this if nothing else has worked for you so far.
In my case the problem came from VirtualBox. For some reason an option was disabled when it should have been enabled.
There were network problems with my VirtualBox machine. To fix them I had to select my machine, open Settings > Network, and make sure that the Cable Connected option was checked. In my case it was not, and the boot failed at this step:
default: SSH address: 127.0.0.1:2222
default: SSH username: vagrant
default: SSH auth method: private key
At first I thought the port was already in use; after that I reinstalled Vagrant and tried other things as well, but none of them worked for me.
This has happened to me several times and the way I solved it was :
Check and make sure your Vagrantfile has the correct private key path :
config.ssh.private_key_path = "/home/razvan/.ssh/id_rsa"
Execute the vagrant ssh command in a Linux terminal.
On your vagrant machine go to
cd /home/vagrant/.ssh
and check whether the ssh key in the authorized_keys file is the same as the one you have on your local machine in ~/.ssh/id_rsa.pub. If not, replace the one in your vagrant authorized_keys with the one from your local machine's ~/.ssh/id_rsa.pub.
Reload Vagrant :
vagrant reload
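If the authorized_keys check above shows a mismatch, one way to push your local public key across, assuming password login still works and the default forwarded port 2222, is ssh-copy-id:
ssh-copy-id -i ~/.ssh/id_rsa.pub -p 2222 vagrant@127.0.0.1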
Hope this helps someone else. Cheers!
1. Locate the private key in the host:
vagrant ssh-config
#
Output:
Host default
...
Port 2222
...
IdentityFile /home/me/.vagrant.d/[...]/virtualbox/vagrant_private_key
...
2. Store the private key path and the port number in variables:
Use these two commands with the output from above:
pk="/home/me/.vagrant.d/.../virtualbox/vagrant_private_key"
port=2222
#
3. Generate a public key and upload it to the guest machine:
Copy/pasta, no changes needed:
ssh-keygen -y -f $pk > authorized_keys
scp -P $port authorized_keys vagrant@localhost:~/.ssh/
vagrant ssh -c "chmod 600 ~/.ssh/authorized_keys"
rm authorized_keys
#
If you are using Windows and this issue appeared unexpectedly, try the following in your configuration.
config.ssh.username = 'vagrant'
config.ssh.password = 'vagrant'
config.ssh.insert_key = 'true'
This basically uses the default vagrant configuration.
Mac Solution:
Added local ssh id_rsa key to vagrant private key
vi /Users//.vagrant/machines/default/virtualbox/private_key
/Users//.ssh/id_rsa
copied public key /Users//.ssh/id_rsa.pub on vagrant box authorized_keys
ssh vagrant@localhost -p 2222 (password: vagrant)
ls -la
cd .ssh
chmod 0600 ~/.ssh/authorized_keys
vagrant reload
Problem resolved.
Thanks to
Make sure your first network interface is NAT. The second network interface can be anything you want when you're building the box. Don't forget the Vagrant user, as discussed in the Google thread.
Good luck.
I also could not get beyond:
default: SSH auth method: private key
When I used the VirtualBox GUI, it told me there was an OS processor mismatch.
To get vagrant up progressing further, in the BIOS settings I had to counter-intuitively:
Disable: Virtualisation
Enable: VT-X
Try toggling these settings in your BIOS.
First of all you should remove the autogenerated insecure_private_key file, then regenerate this file by typing
vagrant ssh-config
then
vagrant halt
vagrant up
It should work
I resolved the issue in the following manner.
1. Create a new SSH key using Git Bash
$ ssh-keygen -t rsa -b 4096 -C "vagrant@localhost"
# Creates a new ssh key, using the provided email as a label
Generating public/private rsa key pair.
When you're prompted to "Enter a file in which to save the key," press Enter. This accepts the default file location.
Enter a file in which to save the key (/Users/[you]/.ssh/id_rsa): [Press enter]
At the prompt, type a secure passphrase. You can leave it empty and press Enter if you do not need a passphrase.
Enter passphrase (empty for no passphrase): [Press enter]
To connect to your Vagrant VM, type the following command:
ssh vagrant@localhost -p 2222
When you get the following message, type "yes" and press Enter.
The authenticity of host 'github.com (192.30.252.1)' can't be established.
RSA key fingerprint is 16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48.
Are you sure you want to continue connecting (yes/no)?
Now, to establish an SSH connection, type: $ vagrant ssh
Copy the host's public key into the authorized_keys file in the Vagrant VM. To do that, go to the "Users/[you]/.ssh" folder, copy the content of the id_rsa.pub file on the host machine, and paste it into the "~/.ssh/authorized_keys" file in the Vagrant VM.
Change the permissions on the SSH folder and the authorized_keys file in the Vagrant VM (see the sketch after these steps).
Restart vagrant with: $ vagrant reload
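For the permission step, the usual values inside the Vagrant VM are (a sketch; adjust if your paths differ):
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys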
Another simple solution: on Windows, go to the file Homestead/Vagrantfile and add these lines to connect with a username/password instead of a private key:
config.ssh.username = "vagrant"
config.ssh.password = "vagrant"
config.ssh.insert_key = false
So, finally part of the file will look like this :
if File.exists? homesteadYamlPath then
  settings = YAML::load(File.read(homesteadYamlPath))
elsif File.exists? homesteadJsonPath then
  settings = JSON.parse(File.read(homesteadJsonPath))
end

config.ssh.username = "vagrant"
config.ssh.password = "vagrant"
config.ssh.insert_key = false

Homestead.configure(config, settings)

if File.exists? afterScriptPath then
  config.vm.provision "shell", path: afterScriptPath, privileged: false
end
Hope this helps.
Just adding my solution:
rm /Users/myusername/.ssh/config
vagrant ssh-config >> /Users/myusername/.ssh/config
Somewhat similar to other proposed solutions here.
Between all of the responses here, there are lots of good things to try. For completeness, if you
ssh vagrant@localhost -p 2222
as @Bizmate suggests, and it fails, be sure you have
AllowUsers vagrant
in the /etc/ssh/sshd_config of your guest/vagrant machine.
I am using Vagrant with a Puphpet setup from May 2015 and had this problem. It appears that the generated configuration didn't handle the Vagrant 1.7.4 (or maybe a bit earlier?) behaviour of regenerating ssh keys if it detects an insecure key.
I solved it by adding the following to my Puphpet-generated Vagrantfile (local setup), inside the "if File.file?(customKey)" clause:
config.ssh.insert_key = false
Reference commit
These are the steps I followed to fix this issue, which occurred when running the vagrant up command.
Create a folder, e.g. F:\projects
Open this folder in Git Bash and run this command
ssh-keygen -t rsa -b 4096 -C "your_email@example.com" (put a valid email address)
This generates a key pair in two separate files in the project folder, e.g. project (private key file) and project.pub (public key file)
Go to the location C:\Users\acer\.vagrant.d and find the file
insecure_private_key
Take a backup of the file, copy the content of the newly created private key, and paste it into the insecure_private_key file. Then copy insecure_private_key and paste it in this location too.
Now run vagrant up in your project location. Once the above issue appears, type vagrant ssh and log in with the username and password (by default both are set to vagrant).
Go to the location cd /home/vagrant/.ssh and type mv authorized_keys authorized_keys_bk
Then type ls -al, and type vi authorized_keys to open the authorized_keys file in the vi editor.
Open the generated public key (project.pub) in Notepad++ and copy its content
Then press i in Git Bash to enable insert mode in the vi editor, right-click and paste. Afterwards press Escape to leave insert mode
:wq! to save the file, then type ls -al
The permissions should then be set like below; no need to change them
drwx------. 2 vagrant vagrant 4096 Feb 13 15:33 .
drwx------. 4 vagrant vagrant 4096 Feb 13 14:04 ..
-rw-------. 1 vagrant vagrant 743 Feb 13 14:26 authorized_keys
-rw-------. 1 root root 409 Feb 13 13:57 authorized_keys_bk
-rw-------. 1 vagrant vagrant 409 Jan 2 23:09 authorized_keys_originial
Otherwise, type chmod 600 authorized_keys and also run chown vagrant:vagrant authorized_keys
Finally, run vagrant halt and vagrant up again.
This worked fine for me.
Just for those people who have been idiots like me, or have had something odd happen to their vagrant machine: this error can also occur when you have changed the permissions of the vagrant user's home directory (deliberately or by accident).
You can log in instead (as described in other posts) using the password ('vagrant') and then run the following command to fix the permissions.
sudo chown -R vagrant:vagrant /home/vagrant
Then you should be able to log in again without entering the password.
TL;DR: The permissions on your vagrant home folder are wrong.
Simple:
homestead destroy
homestead up
Edit (not as simple as first thought):
The issue was that newer versions of Homestead use PHP 7.0 and some other changes. To avoid this mess, make sure you set the version in Homestead.yml:
version: "0"
I solved this problem by running commands in the Windows 7 CMD, as given in the last post of this thread:
https://github.com/mitchellh/vagrant/issues/6744
Some commands that will reinitialize various network states:
Reset WINSOCK entries to installation defaults : netsh winsock reset catalog
Reset TCP/IP stack to installation defaults : netsh int ip reset reset.log
Flush DNS resolver cache : ipconfig /flushdns
Renew DNS client registration and refresh DHCP leases : ipconfig /registerdns
Flush routing table : route /f
I had been beating my head against this for the last couple of days on a repackaged base box (Mac OS X, El Capitan).
Following @Radek's procedure I ran vagrant ssh-config on the source box and got:
...
/Users/Shared/dev/<source-box-name>/.vagrant/machines/default/virtualbox/private_key
...
On the new copy, that command gave me:
...
IdentityFile /Users/<username>/.vagrant.d/insecure_private_key
...
So, I just added this line in the new copy:
...
config.ssh.private_key_path = "/Users/Shared/dev/<source-box-name>/.vagrant/machines/default/virtualbox/private_key"
...
Not perfect, but I can get on with my life.
Not sure your case is the same as mine though.
In my case, vagrant ssh failed key authentication and asked for a password.
I found my old setting below in my ~/.ssh/config (at the top of the file).
PubkeyAcceptedKeyTypes ssh-dss,ssh-rsa
After removing this, key authentication started working. No more password asked.