How to get inside Vault (ssh) with an Ansible playbook? - ssh

I'm using Vagrant and I want to start the Vault server from inside the Vagrant box via an Ansible playbook.
To do that without the playbook, I would run vagrant ssh, and once inside the Vagrant box I can start the Vault server with vault server -dev.
I want to execute vault server -dev directly from the playbook. Any ideas how?
This is my playbook:
---
- name: Playbook to install and use Vault
  become: true
  hosts: all
  tasks:
    - name: Update1
      become: true
      become_user: root
      shell: apt update
    - name: gpg
      become: true
      become_user: root
      shell: apt install gpg
    - name: verify key
      become: true
      become_user: root
      shell: wget -O- https://apt.releases.hashicorp.com/gpg | gpg --dearmor | sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg >/dev/null
    - name: fingerprint
      become: true
      become_user: root
      shell: gpg --no-default-keyring --keyring /usr/share/keyrings/hashicorp-archive-keyring.gpg --fingerprint
    - name: repository
      become: true
      become_user: root
      shell: echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
    - name: update2
      become: true
      become_user: root
      shell: apt update
    - name: vault install
      become: true
      become_user: root
      shell: apt install vault
    - name: start vault
      become: true
      become_user: vagrant
      shell: vault server -dev -dev-listen-address=0.0.0.0:8200
The last task is my attempt to start the Vault server, but the playbook gets stuck at:
TASK [start vault] *********************************************************
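I was thinking something like async/poll might stop the task from hanging, for example (I have not verified this, and the numbers are just a guess):
- name: start vault dev server without blocking the play
  become: true
  become_user: vagrant
  shell: vault server -dev -dev-listen-address=0.0.0.0:8200 > /tmp/vault-dev.log 2>&1
  async: 3600   # let the process keep running for up to an hour
  poll: 0       # fire and forget; the play continues immediately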
I also tried adding
- name: start vault
  become: true
  shell: vagrant ssh
before, but then I get:
TASK [start vault] *************************************************************
fatal: [default]: FAILED! => {"changed": true, "cmd": "vagrant ssh", "delta": "0:00:00.003245", "end": "2022-07-03 16:18:31.480702", "msg": "non-zero return code", "rc": 127, "start": "2022-07-03 16:18:31.477457", "stderr": "/bin/sh: 1: vagrant: not found", "stderr_lines": ["/bin/sh: 1: vagrant: not found"], "stdout": "", "stdout_lines": []}
This is my Vagrantfile, if needed:
Vagrant.configure("2") do |config|
  VAGRANT_DEFAULT_PROVIDER = "virtualbox"
  config.vm.hostname = "carebox-idan"
  config.vm.provision "ansible", playbook: "playbook.yml"
  config.vm.box = "laravel/homestead"
  config.vm.network "forwarded_port", guest: 8200, host: 8200, auto_correct: "true"
  config.ssh.forward_agent = true
end
thank you.

Related

Run ansible as root with specific sudoers

My issue is that I have one server where the sudoers for the ansible user is like this:
ansible ALL=(root) NOPASSWD: /usr/bin/su - root
Hence, the only way to switch to the root user is:
sudo su - root
When I try to run the below ansible playbook:
---
- name: Configure Local Repo server address
  hosts: lab
  remote_user: ansible
  become: yes
  become_user: root
  become_method: runas
  tasks:
    - name: test whoami
      become: yes
      shell:
        cmd: whoami
      register: whoami_output
    - debug: var=whoami_output
    - name: Deploy local.repo file to the hosts
      become: yes
      copy:
        src: /etc/ansible/files/local.repo
        dest: /etc/yum.repos.d/local.repo
        owner: ansible
        group: ansible
        mode: 0644
        backup: yes
      register: deploy_file_output
    - debug: var=deploy_file_output
I got the following error:
ansible-playbook --private-key /etc/ansible/keys/ansible_key /etc/ansible/playbooks/local_repo_provisioning.yml
PLAY [Configure Local Repo server address] *****************************************************************************************************************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************************************************************************************************************************************
ok: [10.175.65.12]
TASK [test whoami] *****************************************************************************************************************************************************************************************************************************
changed: [10.175.65.12]
TASK [debug] ***********************************************************************************************************************************************************************************************************************************
ok: [10.175.65.12] => {
    "whoami_output": {
        "changed": true,
        "cmd": "whoami",
        "delta": "0:00:00.003301",
        "end": "2023-01-15 17:53:56.312715",
        "failed": false,
        "msg": "",
        "rc": 0,
        "start": "2023-01-15 17:53:56.309414",
        "stderr": "",
        "stderr_lines": [],
        "stdout": "ansible",
        "stdout_lines": [
            "ansible"
        ]
    }
}
TASK [Deploy local.repo file to the hosts] *****************************************************************************************************************************************************************************************************
fatal: [10.175.65.12]: FAILED! => {"changed": false, "checksum": "2356deb90d20d5f31351c719614d5b5760ab967d", "msg": "Destination /etc/yum.repos.d not writable"}
PLAY RECAP *************************************************************************************************************************************************************************************************************************************
10.175.65.12 : ok=3 changed=1 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
When I tried to use become_method: sudo I got the "Missing sudo password" message. Further, when I tried become_method: su I got the "Timeout (12s) waiting for privilege escalation prompt:" message.
All in all, could someone explain how Ansible runs commands depending on the "become_method" that is set? Is there a way to switch to the root user with that kind of sudoers configuration?
Thanks in advance!
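For reference, a rough sketch of what each become_method tried above actually does (my own summary, simplified; the sudo wrapper is visible verbatim in the EC2 question's debug output further down this page):
# become_method: sudo  -> sudo -H -S -n -u root /bin/sh -c '<module>'   # not allowed by a sudoers rule that only permits "su - root"
# become_method: su    -> roughly su root -c '<module>'                 # needs root's password, supplied via -K / --ask-become-pass
# become_method: runas -> Windows-only; over SSH no escalation happens  # which is why whoami still printed "ansible"
#
# A sketch of the su path, run with: ansible-playbook ... --ask-become-pass
- name: Configure Local Repo server address
  hosts: lab
  remote_user: ansible
  become: yes
  become_user: root
  become_method: su
  tasks:
    - name: test whoami
      command: whoami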

Ansible not reporting distribution info on Ubuntu 20.04?

Example on Ubuntu 18.04 reporting distribution info in 'ansible_facts':
$ ansible -i hosts ubuntu1804 -u root -m setup -a "filter=ansible_distribution*"
ubuntu1804 | SUCCESS => {
    "ansible_facts": {
        "ansible_distribution": "Ubuntu",
        "ansible_distribution_file_parsed": true,
        "ansible_distribution_file_path": "/etc/os-release",
        "ansible_distribution_file_variety": "Debian",
        "ansible_distribution_major_version": "18",
        "ansible_distribution_release": "bionic",
        "ansible_distribution_version": "18.04"
    },
    "changed": false
}
Example of same command against Ubuntu 20.04:
$ ansible -i hosts ubuntu2004 -u root -m setup -a "filter=ansible_distribution*"
ubuntu2004 | SUCCESS => {
    "ansible_facts": {},
    "changed": false
}
Is this an issue with Ubuntu or Ansible? Is there a workaround?
Issue resolved with today's update to ansible 2.9.7.
After some research into detecting the Ubuntu 20.04 release, we got this working on Ansible 2.5.1 by using the LSB facts:
- hosts: localhost
  become: true
  gather_facts: yes
  tasks:
    - name: System details
      debug:
        msg: "{{ ansible_facts['lsb']['release'] }}"
    - name: ubuntu 18
      shell: echo "hello 18"
      register: ub18
      when: ansible_facts['lsb']['release'] == "18.04"
    - debug:
        msg: "{{ ub18 }}"
    - name: ubuntu 20
      shell: echo "hello 20"
      register: ub20
      when: ansible_facts['lsb']['release'] == "20.04"
    - debug:
        msg: "{{ ub20 }}"

Ansible: setting user on dynamic ec2

I don't appear to be connecting to the remote host. Why not?
Command-line: ansible-playbook -i "127.0.0.1," -c local playbook.yml
This is the playbook. The role, create_ec2_instance, creates the variable ec2hosts used within the second portion of the playbook (ansible/playbook.yml):
# Create instance
- hosts: 127.0.0.1
  connection: local
  gather_facts: false
  roles:
    - create_ec2_instance

# Configure and install all we need
- hosts: ec2hosts
  remote_user: admin
  gather_facts: false
  roles:
    - show-hosts
    - prepare-target-system
    - install-project-dependencies
    - install-project
This is just a simple ec2 module creation. This works as desired. (ansible/roles/create-ec2-instance/tasks/main.yml):
- name: Create instance
  ec2:
    region: "{{ instance_values['region'] }}"
    zone: "{{ instance_values['zone'] }}"
    keypair: "{{ instance_values['key_pair'] }}"
    group: "{{ instance_values['security_groups'] }}"
    instance_type: "{{ instance_values['instance_type'] }}"
    image: "{{ instance_values['image_id'] }}"
    count_tag: "{{ instance_values['name'] }}"
    exact_count: 1
    wait: yes
    instance_tags:
      Name: "{{ instance_values['name'] }}"
  when: ec2_instances.instances[instance_values['name']]|default("") == ""
  register: ec2_info

- name: Wait for instances to listen on port 22
  wait_for:
    state: started
    host: "{{ ec2_info.instances[0].public_dns_name }}"
    port: 22
  when: ec2_info|changed

- name: Add new instance to ec2hosts group
  add_host:
    hostname: "{{ ec2_info.instances[0].public_ip }}"
    groupname: ec2hosts
    instance_id: "{{ ec2_info.instances[0].id }}"
  when: ec2_info|changed
I've included extra methods for transparency, though these are really basic (ansible/roles/show-hosts/tasks/main.yml):
- name: List hosts
  debug: msg="groups={{groups}}"
  run_once: true
and we have (ansible/roles/prepare-target-system/tasks/main.yml):
- name: get the username running the deploy
  local_action: command whoami
  register: username_on_the_host

- debug: var=username_on_the_host

- name: Add necessary system packages
  become: yes
  become_method: sudo
  package: "name={{item}} state=latest"
  with_items:
    - software-properties-common
    - python-software-properties
    - devscripts
    - build-essential
    - libffi-dev
    - libssl-dev
    - vim
Edit: I've updated to remote_user above; below is the error output:
TASK [prepare-target-system : debug] *******************************************
task path: <REDACTED>/ansible/roles/prepare-target-system/tasks/main.yml:5
ok: [35.166.52.247] => {
    "username_on_the_host": {
        "changed": true,
        "cmd": [
            "whoami"
        ],
        "delta": "0:00:00.009067",
        "end": "2017-01-07 08:23:42.033551",
        "rc": 0,
        "start": "2017-01-07 08:23:42.024484",
        "stderr": "",
        "stdout": "brianbruggeman",
        "stdout_lines": [
            "brianbruggeman"
        ],
        "warnings": []
    }
}
TASK [prepare-target-system : Ensure that we can update apt-repository] ********
task path: /<REDACTED>/ansible/roles/prepare-target-system/tasks/Debian.yml:2
Using module file <REDACTED>/.envs/dg2/lib/python2.7/site-packages/ansible/modules/core/packaging/os/apt.py
<35.166.52.247> ESTABLISH LOCAL CONNECTION FOR USER: brianbruggeman
<35.166.52.247> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769 `" && echo ansible-tmp-1483799022.33-268449475843769="` echo $HOME/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769 `" ) && sleep 0'
<35.166.52.247> PUT /var/folders/r9/kv1j05355r34570x2f5wpxpr0000gn/T/tmpK2__II TO <REDACTED>/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769/apt.py
<35.166.52.247> EXEC /bin/sh -c 'chmod u+x <REDACTED>/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769/ <REDACTED>/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769/apt.py && sleep 0'
<35.166.52.247> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-owktjrfvqssjrqcetaxjkwowkzsqfitq; /usr/bin/python <REDACTED>/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769/apt.py; rm -rf "<REDACTED>/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769/" > /dev/null 2>&1'"'"' && sleep 0'
failed: [35.166.52.247] (item=[u'software-properties-common', u'python-software-properties', u'devscripts', u'build-essential', u'libffi-dev', u'libssl-dev', u'vim']) => {
    "failed": true,
    "invocation": {
        "module_name": "apt"
    },
    "item": [
        "software-properties-common",
        "python-software-properties",
        "devscripts",
        "build-essential",
        "libffi-dev",
        "libssl-dev",
        "vim"
    ],
    "module_stderr": "sudo: a password is required\n",
    "module_stdout": "",
    "msg": "MODULE FAILURE"
}
to retry, use: --limit #<REDACTED>/ansible/<redacted playbook>.retry
PLAY RECAP *********************************************************************
127.0.0.1 : ok=6 changed=2 unreachable=0 failed=0
35.166.52.247 : ok=3 changed=1 unreachable=0 failed=1
Use become:
remote_user: ansible
become: true
become_user: root
Ansible docs: Become (Privilege Escalation)
For example: in my scripts I connect to the remote host as user 'ansible' (because ssh is disabled for root) and then become 'root'. Occasionally I connect as 'ansible' and become the 'apache' user instead. So remote_user specifies the username to connect with, and become_user is the username after the connection.
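To make that concrete, a minimal play header (a sketch of the same idea):
- hosts: ec2hosts
  remote_user: ansible   # the user Ansible logs in as over SSH
  become: true
  become_user: root      # the user tasks run as after escalation
  tasks:
    - name: confirm the effective user
      command: whoami    # should print "root"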
PS: Passwordless sudo for the ansible user:
- name: nopasswd sudo for ansible user
  lineinfile: "dest=/etc/sudoers state=present regexp='^{{ ansible_user }}' line='{{ ansible_user }} ALL=(ALL) NOPASSWD: ALL'"
This is a known workaround, see here: Specify sudo password for Ansible
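Alternatively, if editing sudoers is not an option, the become password can be supplied through the standard ansible_become_password variable (a sketch; the vault_become_password name is just an example, keep the real value in ansible-vault):
# group_vars/ec2hosts.yml (illustrative path)
ansible_user: admin
ansible_become: true
ansible_become_method: sudo
ansible_become_password: "{{ vault_become_password }}"   # store the real value with ansible-vault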

Ansible and ForwardAgent for sudo_user

Could someone tell me what I am doing wrong? I'm working with an Amazon EC2 instance and want to have the agent forwarded to the user rails, but when I run the next task:
- acl: name={{ item }} etype=user entity=rails permissions=rwx state=present
  with_items:
    - "{{ ansible_env.SSH_AUTH_SOCK|dirname }}"
    - "{{ ansible_env.SSH_AUTH_SOCK }}"
  sudo: true
I see failed result:
(item=/tmp/ssh-ULvzaZpq2U) => {"failed": true, "item": "/tmp/ssh-ULvzaZpq2U"}
msg: path not found or not accessible!
When I try it manually, without Ansible, it looks good:
setfacl -m rails:rwx "$SSH_AUTH_SOCK"
setfacl -m rails:x $(dirname "$SSH_AUTH_SOCK")
sudo -u rails ssh -T git@github.com //Hi KELiON! You've successfully authenticated, but GitHub does not provide shell access.
I even tried to run new instance and run test ansible playbook:
#!/usr/bin/env ansible-playbook
---
- hosts: all
  remote_user: ubuntu
  tasks:
    - user: name=rails
      sudo: true
    - name: Add ssh agent line to sudoers
      lineinfile:
        dest: /etc/sudoers
        state: present
        regexp: SSH_AUTH_SOCK
        line: Defaults env_keep += "SSH_AUTH_SOCK"
      sudo: true
    - acl: name={{ item }} etype=user entity=rails permissions=rwx state=present
      with_items:
        - "{{ ansible_env.SSH_AUTH_SOCK|dirname }}"
        - "{{ ansible_env.SSH_AUTH_SOCK }}"
      sudo: true
    - name: Test that git ssh connection is working.
      command: ssh -T git@github.com
      sudo: true
      sudo_user: rails
ansible.cfg is:
[ssh_connection]
pipelining=True
ssh_args=-o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=60s
[defaults]
sudo_flags=-HE
hostfile=staging
But the same result. Any ideas?
I had the same issue and found the answer at https://github.com/ansible/ansible/issues/7235#issuecomment-45842303
My solution varied a bit from his, because acl didn’t work for me, so I:
Changed ansible.cfg:
[defaults]
sudo_flags=-HE
[ssh_connection]
# COMMENTED OUT: ssh_args = -o ForwardAgent=yes
Added tasks/ssh_agent_hack.yml containing:
- name: "(ssh-agent hack: grant access to {{ deploy_user }})"
# SSH-agent socket is forwarded for the current user only (0700 file). Let's change it
# See: https://github.com/ansible/ansible/issues/7235#issuecomment-45842303
# See: http://serverfault.com/questions/107187/ssh-agent-forwarding-and-sudo-to-another-user
become: false
file: group={{deploy_user}} mode=g+rwx path={{item}}
with_items:
- "{{ ansible_env.SSH_AUTH_SOCK|dirname }}"
- "{{ ansible_env.SSH_AUTH_SOCK }}"
NOTE: the become: false setting is there because I ssh in as root. If you ssh in as something else, you will need to become root to do the fix, and then below become your deploy_user (if it isn't the user you are ssh'ing in as).
And then called it from my deploy.yml playbook:
- hosts: apps
  gather_facts: True
  become: True
  become_user: "{{deploy_user}}"
  pre_tasks:
    - include: tasks/ssh_agent_hack.yml
      tags: [ 'deploy' ]
  roles:
    - { role: carlosbuenosvinos.ansistrano-deploy, tags: [ 'deploy' ] }
Side note: adding ForwardAgent yes to the host entry in ~/.ssh/config didn't affect what worked (I tried all 8 combinations): only setting sudo_flags but not ssh_args works, and it doesn't matter whether forwarding is on or off in ~/.ssh/config for OpenSSH (tested under Ubuntu trusty).
Also note: I have pipelining=True in ansible.cfg
This worked for me in ansible v2.3.0.0:
$ vi ansible.cfg
[defaults]
roles_path = ./roles
retry_files_enabled = False
[ssh_connection]
ssh_args=-o ControlMaster=auto -o ControlPersist=60s -o ControlPath=/tmp/ansible-ssh-%h-%p-%r -o ForwardAgent=yes
$ vi roles/pull-code/tasks/main.yml
- name: '(Hack: keep SSH forwarding socket)'
  lineinfile:
    dest: /etc/sudoers
    insertafter: '^#?\s*Defaults\s+env_keep\b'
    line: 'Defaults env_keep += "SSH_AUTH_SOCK"'

- name: '(Hack: grant access to the socket to {{app_user}})'
  become: false
  acl: name='{{item}}' etype=user entity='{{app_user}}' permissions="rwx" state=present
  with_items:
    - "{{ ansible_env.SSH_AUTH_SOCK|dirname }}"
    - "{{ ansible_env.SSH_AUTH_SOCK }}"

- name: Pull the code
  become: true
  become_user: '{{app_user}}'
  git:
    repo: '{{repository}}'
    dest: '{{code_dest}}'
    accept_hostkey: yes
I know this answer is late to the party, but the other answers seemed a bit overly complicated when I distilled my solution to the bare minimum. Here's an example playbook to clone a git repo that requires authentication for access via ssh:
- hosts: all
  connection: ssh
  vars:
    # forward agent so access to git via ssh works
    ansible_ssh_extra_args: '-o ForwardAgent=yes'
    utils_repo: "git@git.example.com:devops/utils.git"
    utils_dir: "/opt/utils"
  tasks:
    - name: Install Utils
      git:
        repo: "{{ utils_repo }}"
        dest: "{{ utils_dir }}"
        update: true
        accept_hostkey: yes
      become: true
      become_method: sudo
      # Need this to ensure we have the SSH_AUTH_SOCK environment variable
      become_flags: '-HE'
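A quick way to confirm the forwarded agent actually survives the privilege escalation is a throwaway check task appended to the list above, mirroring the test in the question (a sketch; the host and user names are placeholders):
- name: Verify agent forwarding works for the app user
  command: ssh -T git@git.example.com
  become: true
  become_user: "{{ app_user }}"
  become_flags: '-HE'              # keep SSH_AUTH_SOCK in the environment
  register: agent_check
  changed_when: false
  failed_when: agent_check.rc > 1  # such git servers often return 1 even when authentication succeeds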

How do I add my own public key to Vagrant VM?

I have a problem adding an SSH key to a Vagrant VM. Basically the setup that I have here works fine. Once the VMs are created, I can access them via vagrant ssh; the user "vagrant" exists and there's an SSH key for this user in the authorized_keys file.
What I'd like to do now is: to be able to connect to those VMs via ssh or use scp. So I would only need to add my public key from id_rsa.pub to the authorized_keys - just like I'd do with ssh-copy-id.
Is there a way to tell Vagrant during the setup that my public key should be included? If not (which is likely, according to my google results), is there a way to easily append my public key during the vagrant setup?
You can use Ruby's core File module, like so:
config.vm.provision "shell" do |s|
ssh_pub_key = File.readlines("#{Dir.home}/.ssh/id_rsa.pub").first.strip
s.inline = <<-SHELL
echo #{ssh_pub_key} >> /home/vagrant/.ssh/authorized_keys
echo #{ssh_pub_key} >> /root/.ssh/authorized_keys
SHELL
end
This working example appends ~/.ssh/id_rsa.pub to the ~/.ssh/authorized_keys of both the vagrant and root user, which will allow you to use your existing SSH key.
Copying the desired public key would fall squarely into the provisioning phase. The exact answer depends on what provisioning you fancy to use (shell, Chef, Puppet etc). The most trivial would be a file provisioner for the key, something along this:
config.vm.provision "file", source: "~/.ssh/id_rsa.pub", destination: "~/.ssh/me.pub"
Well, actually you need to append to authorized_keys. Use the shell provisioner, like so:
Vagrant.configure(2) do |config|
  # ... other config

  config.vm.provision "shell", inline: <<-SHELL
    cat /home/vagrant/.ssh/me.pub >> /home/vagrant/.ssh/authorized_keys
  SHELL

  # ... other config
end
You can also use a true provisioner, like Puppet. For example see Managing SSH Authorized Keys with Puppet.
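If you happen to provision with Ansible instead of Puppet, the equivalent is the authorized_key module (a sketch):
- hosts: all
  tasks:
    - name: Authorize my workstation key for the vagrant user
      authorized_key:
        user: vagrant
        state: present
        key: "{{ lookup('file', lookup('env', 'HOME') + '/.ssh/id_rsa.pub') }}"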
There's a more "elegant" way of accomplishing what you want to do. You can find the existing private key and use it instead of going through the trouble of adding your public key.
Proceed like this to see the path to existing private key (look below for IdentityFile):
run vagrant ssh-config
result:
$ vagrant ssh-config
Host magento2.vagrant150
  HostName 127.0.0.1
  User vagrant
  Port 3150
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile "/Users/madismanni/m2/vagrant-magento/.vagrant/machines/magento2.vagrant150/virtualbox/private_key"
  IdentitiesOnly yes
  LogLevel FATAL
Then you can use the private key like this; note also the switch that turns off password authentication:
ssh -i /Users/madismanni/m2/vagrant-magento/.vagrant/machines/magento2.vagrant150/virtualbox/private_key -o PasswordAuthentication=no vagrant@127.0.0.1 -p 3150
This excellent answer was added by user76329 in a rejected Suggested Edit
Expanding on Meow's example, we can copy the local pub/private ssh keys, set permissions, and make the inline script idempotent (runs once and will only repeat if the test condition fails, thus needing provisioning):
config.vm.provision "shell" do |s|
ssh_prv_key = ""
ssh_pub_key = ""
if File.file?("#{Dir.home}/.ssh/id_rsa")
ssh_prv_key = File.read("#{Dir.home}/.ssh/id_rsa")
ssh_pub_key = File.readlines("#{Dir.home}/.ssh/id_rsa.pub").first.strip
else
puts "No SSH key found. You will need to remedy this before pushing to the repository."
end
s.inline = <<-SHELL
if grep -sq "#{ssh_pub_key}" /home/vagrant/.ssh/authorized_keys; then
echo "SSH keys already provisioned."
exit 0;
fi
echo "SSH key provisioning."
mkdir -p /home/vagrant/.ssh/
touch /home/vagrant/.ssh/authorized_keys
echo #{ssh_pub_key} >> /home/vagrant/.ssh/authorized_keys
echo #{ssh_pub_key} > /home/vagrant/.ssh/id_rsa.pub
chmod 644 /home/vagrant/.ssh/id_rsa.pub
echo "#{ssh_prv_key}" > /home/vagrant/.ssh/id_rsa
chmod 600 /home/vagrant/.ssh/id_rsa
chown -R vagrant:vagrant /home/vagrant
exit 0
SHELL
end
A shorter and more correct version would be:
ssh_pub_key = File.readlines("#{Dir.home}/.ssh/id_rsa.pub").first.strip
config.vm.provision 'shell', inline: 'mkdir -p /root/.ssh'
config.vm.provision 'shell', inline: "echo #{ssh_pub_key} >> /root/.ssh/authorized_keys"
config.vm.provision 'shell', inline: "echo #{ssh_pub_key} >> /home/vagrant/.ssh/authorized_keys", privileged: false
Otherwise the user's .ssh/authorized_keys will belong to the root user.
It will still add a line on every provision run, but Vagrant is used for testing and a VM usually has a short life, so it is not a big problem.
I ended up using code like:
config.ssh.forward_agent = true
config.ssh.insert_key = false
config.ssh.private_key_path = ["~/.vagrant.d/insecure_private_key", "~/.ssh/id_rsa"]
config.vm.provision :shell, privileged: false do |s|
  ssh_pub_key = File.readlines("#{Dir.home}/.ssh/id_rsa.pub").first.strip
  s.inline = <<-SHELL
    echo #{ssh_pub_key} >> /home/$USER/.ssh/authorized_keys
    sudo bash -c "echo #{ssh_pub_key} >> /root/.ssh/authorized_keys"
  SHELL
end
Note that we should not hard-code the path /home/vagrant/.ssh/authorized_keys, since some Vagrant boxes do not use the vagrant username.
None of the older posts worked for me, although some came close. I had to make RSA keys with ssh-keygen in the terminal and go with custom keys; in other words, I gave up on using Vagrant's keys.
I'm on macOS Mojave as of the date of this post. I've set up two Vagrant boxes in one Vagrantfile. I'm showing all of the first box so newbies can see the context. I put the .ssh folder in the same folder as the Vagrantfile; otherwise use user9091383's setup.
Credit for this solution goes to this coder.
Vagrant.configure("2") do |config|
config.vm.define "pfbox", primary: true do |pfbox|
pfbox.vm.box = "ubuntu/xenial64"
pfbox.vm.network "forwarded_port", host: 8084, guest: 80
pfbox.vm.network "forwarded_port", host: 8080, guest: 8080
pfbox.vm.network "forwarded_port", host: 8079, guest: 8079
pfbox.vm.network "forwarded_port", host: 3000, guest: 3000
pfbox.vm.provision :shell, path: ".provision/bootstrap.sh"
pfbox.vm.synced_folder "ubuntu", "/home/vagrant"
pfbox.vm.provision "file", source: "~/.gitconfig", destination: "~/.gitconfig"
pfbox.vm.network "private_network", type: "dhcp"
pfbox.vm.network "public_network"
pfbox.ssh.insert_key = false
ssh_key_path = ".ssh/" # This may not be necessary. I may remove.
pfbox.vm.provision "shell", inline: "mkdir -p /home/vagrant/.ssh"
pfbox.ssh.private_key_path = ["~/.vagrant.d/insecure_private_key", ".ssh/id_rsa"]
pfbox.vm.provision "file", source: ".ssh/id_rsa.pub", destination: ".ssh/authorized_keys"
pfbox.vm.box_check_update = "true"
pfbox.vm.hostname = "pfbox"
# VirtualBox
config.vm.provider "virtualbox" do |vb|
# vb.gui = true
vb.name = "pfbox" # friendly name for Oracle VM VirtualBox Manager
vb.memory = 2048 # memory in megabytes 2.0 GB
vb.cpus = 1 # cpu cores, can't be more than the host actually has.
end
end
config.vm.define "dbbox" do |dbbox|
...
This is an excellent thread that helped me solve a similar situation as the original poster describes.
While I ultimately used the settings/logic presented in smartwjw’s answer, I ran into a hitch since I use the VAGRANT_HOME environment variable to save the core vagrant.d directory stuff on an external hard drive on one of my development systems.
So here is the adjusted code I am using in my Vagrantfile to accommodate for a VAGRANT_HOME environment variable being set; the “magic” happens in this line vagrant_home_path = ENV["VAGRANT_HOME"] ||= "~/.vagrant.d":
config.ssh.insert_key = false
config.ssh.forward_agent = true
vagrant_home_path = ENV["VAGRANT_HOME"] ||= "~/.vagrant.d"
config.ssh.private_key_path = ["#{vagrant_home_path}/insecure_private_key", "~/.ssh/id_rsa"]
config.vm.provision :shell, privileged: false do |shell_action|
  ssh_public_key = File.readlines("#{Dir.home}/.ssh/id_rsa.pub").first.strip
  shell_action.inline = <<-SHELL
    echo #{ssh_public_key} >> /home/$USER/.ssh/authorized_keys
  SHELL
end
For the inline shell provisioners: it is common for a public key to contain spaces, comments, etc., so make sure to put (escaped) quotes around the variable that expands to the public key:
config.vm.provision 'shell', inline: "echo \"#{ssh_pub_key}\" >> /home/vagrant/.ssh/authorized_keys", privileged: false
A pretty complete example; hope this helps someone who visits next. All the concrete values are moved to external config files, and the IP assignment is just for trying out.
# -*- mode: ruby -*-
# vi: set ft=ruby :

require 'yaml'
vmconfig = YAML.load_file('vmconfig.yml')

=begin
Script to create VMs with public IPs, VM creation governed by the provided
config file.
All Vagrant configuration is done below. The "2" in Vagrant.configure
configures the configuration version (we support older styles for
backwards compatibility). Please don't change it unless you know what
you're doing.
Default user `vagrant` is created and the ssh key is overridden. Make sure to have
the files `vagrant_rsa` (private key) and `vagrant_rsa.pub` (public key) in the
path `./.ssh/`
The same files need to be available for all the users you want to create in each of
these VMs
=end

uid_start = vmconfig['uid_start']
ip_start = vmconfig['ip_start']
vagrant_private_key = Dir.pwd + '/.ssh/vagrant_rsa'
guest_sshkeys = '/' + Dir.pwd.split('/')[-1] + '/.ssh/'

Vagrant.configure('2') do |config|
  vmconfig['machines'].each do |machine|
    config.vm.define "#{machine}" do |node|
      ip_start += 1
      node.vm.box = vmconfig['vm_box_name']
      node.vm.box_version = vmconfig['vm_box_version']
      node.vm.box_check_update = false
      node.vm.boot_timeout = vmconfig['vm_boot_timeout']
      node.vm.hostname = "#{machine}"
      node.vm.network "public_network", bridge: "#{vmconfig['bridge_name']}", auto_config: false
      node.vm.provision "shell", run: "always", inline: "ifconfig #{vmconfig['ethernet_device']} #{vmconfig['public_ip_part']}#{ip_start} netmask #{vmconfig['subnet_mask']} up"
      node.ssh.insert_key = false
      node.ssh.private_key_path = ['~/.vagrant.d/insecure_private_key', "#{vagrant_private_key}"]
      node.vm.provision "file", source: "#{vagrant_private_key}.pub", destination: "~/.ssh/authorized_keys"
      node.vm.provision "shell", inline: <<-EOC
        sudo sed -i -e "\\#PasswordAuthentication yes# s#PasswordAuthentication yes#PasswordAuthentication no#g" /etc/ssh/sshd_config
        sudo systemctl restart sshd.service
      EOC
      vmconfig['users'].each do |user|
        uid_start += 1
        node.vm.provision "shell", run: "once", privileged: true, inline: <<-CREATEUSER
          sudo useradd -m -s /bin/bash -U #{user} -u #{uid_start}
          sudo mkdir /home/#{user}/.ssh
          sudo cp #{guest_sshkeys}#{user}_rsa.pub /home/#{user}/.ssh/authorized_keys
          sudo chown -R #{user}:#{user} /home/#{user}
          sudo su
          echo "%#{user} ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/#{user}
          exit
        CREATEUSER
      end
    end
  end
end
It's rather an old question, but maybe this will help someone nowadays.
What works like a charm for me is:
Vagrant.configure("2") do |config|
config.vm.box = "debian/bullseye64"
config.vm.define "debian-1"
config.vm.hostname = "debian-1"
# config.vm.network "private_network", ip: "192.168.56.2" # this enables Internal network mode for VirtualBox
config.vm.network "private_network", type: "dhcp" # this enables Host-only network mode for VirtualBox
config.vm.network "forwarded_port", guest: 8081, host: 8081 # with this you can hit http://mypc:8081 to load the web service configured in the vm..
config.ssh.host = "mypc" # use the base host's hostname.
config.ssh.insert_key = true # do not use the global public image key.
config.ssh.forward_agent = true # have already the agent keys preconfigured for ease.
config.vm.provision "ansible" do |ansible|
ansible.playbook = "../../../ansible/playbooks/configurations.yaml"
ansible.inventory_path = "../../../ansible/inventory/hosts.ini"
ansible.extra_vars = {
nodes: "#{config.vm.hostname}",
username: "vagrant"
}
ansible.ask_vault_pass = true
end
end
Then my Ansible provisioner playbook/role configurations.yaml contains this:
- name: Create .ssh folder if not exists
  file:
    state: directory
    path: "{{ ansible_env.HOME }}/.ssh"

- name: Add authorised key (for remote connection)
  authorized_key:
    state: present
    user: "{{ username }}"
    key: "{{ lookup('file', 'eos_id_rsa.pub') }}"

- name: Add public SSH key in ~/.ssh
  copy:
    src: eos_id_rsa.pub
    dest: "{{ ansible_env.HOME }}/.ssh"
    owner: "{{ username }}"
    group: "{{ username }}"

- name: Add private SSH key in ~/.ssh
  copy:
    src: eos_id_rsa
    dest: "{{ ansible_env.HOME }}/.ssh"
    owner: "{{ username }}"
    group: "{{ username }}"
    mode: 0600
Madis Maenni's answer is closest to the best solution:
just do:
vagrant ssh-config >> ~/.ssh/config
chmod 600 ~/.ssh/config
then you can just ssh via hostname.
To get the list of hostnames configured in ~/.ssh/config:
grep -E '^Host ' ~/.ssh/config
My example:
$ grep -E '^Host' ~/.ssh/config
Host web
Host db
$ ssh web
[vagrant@web ~]$
Generate an RSA key pair for Vagrant authentication: ssh-keygen -f ~/.ssh/vagrant
You might also want to add the Vagrant identity files to your ~/.ssh/config:
IdentityFile ~/.ssh/vagrant
IdentityFile ~/.vagrant.d/insecure_private_key
For some reason we can't just specify the key we want to insert, so we take a few extra steps and generate a key ourselves. This way we get security and know exactly which key we need (plus all Vagrant boxes will get the same key).
Can't ssh to vagrant VMs using the insecure private key (vagrant 1.7.2)
How do I add my own public key to Vagrant VM?
config.ssh.insert_key = false
config.ssh.private_key_path = ['~/.ssh/vagrant', '~/.vagrant.d/insecure_private_key']
config.vm.provision "file", source: "~/.ssh/vagrant.pub", destination: "/home/vagrant/.ssh/vagrant.pub"
config.vm.provision "shell", inline: <<-SHELL
  cat /home/vagrant/.ssh/vagrant.pub >> /home/vagrant/.ssh/authorized_keys
  mkdir -p /root/.ssh
  cat /home/vagrant/.ssh/authorized_keys >> /root/.ssh/authorized_keys
SHELL