Can't ping Terraform created droplets with Ansible - ssh

Using Terraform I have created 3 droplets on DigitalOcean. As part of that run, Terraform writes an SSH key and an inventory.txt file into a folder.
Here is how it looks in the Terraform code:
resource "local_file" "servers_ipv4" {
content = join("\n", [
for idx, s in module.openvpn_do_infrastructure_module.servers_ipv4:
<<EOT
${var.droplet_names[idx]} ansible_host=${s} ansible_user=root ansible_ssh_private_key=openvpn_do_ssh.key
EOT
])
filename = "${path.module}/ansible/inventory.txt"
}
resource "local_file" "ssh_keys" {
content = module.openvpn_do_infrastructure_module.ssh_keys
filename = "${path.module}/ansible/openvpn_do_ssh.key"
}
Then there is the ansible folder. After the script has run and the droplets are created, this folder contains 3 files. The first file is just ansible.cfg:
[defaults]
host_key_checking = false
inventory = ./inventory.txt
The other 2 are created by Terraform: the SSH key, openvpn_do_ssh.key, and inventory.txt:
certificate-authority-server ansible_host=123.123.123.121 ansible_user=root ansible_ssh_private_key=openvpn_do_ssh.key
openvpn-server ansible_host=123.123.123.122 ansible_user=root ansible_ssh_private_key=openvpn_do_ssh.key
nextcloud-server ansible_host=123.123.123.123 ansible_user=root ansible_ssh_private_key=openvpn_do_ssh.key
And here is the problem. When I do ansible all -m ping, I get errors:
certificate-authority-server | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: root@123.123.123.121: Permission denied (publickey).",
    "unreachable": true
}
nextcloud-server | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: root@123.123.123.122: Permission denied (publickey).",
    "unreachable": true
}
openvpn-server | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: root@123.123.123.123: Permission denied (publickey).",
    "unreachable": true
}
Also, I can connect to those droplets with SSH directly and everything is just fine. Even when I change the permissions on the .key file, I still get the same error. I tried to get more logs with the -vvv flag, and here is the most interesting info I found:
ESTABLISH SSH CONNECTION FOR USER: root
...
<123.123.123.121> (255, b'', b"Warning: Permanently added '123.123.123.121' (ED25519) to the list of known hosts.\r\nroot@123.123.123.121: Permission denied (publickey).\r\n")
<123.123.123.121> (255, b'', b'root@123.123.123.121: Permission denied (publickey).\r\n')

I have solved this problem. This is what has helped me:
First of all, I changed the extension of the SSH key file from .key to .pem.
To ansible.cfg I added the following line:
[defaults]
host_key_checking = false
inventory = ./inventory.txt
private_key_file = ./openvpn_do_ssh.pem
The last thing I did was to add a read-only file_permission for the SSH key:
resource "local_file" "ssh_keys" {
content = module.openvpn_do_infrastructure_module.ssh_keys
filename = "${path.module}/ansible/openvpn_do_ssh.pem"
content = module.openvpn_do_infrastructure_module.ssh_keys
filename = "${path.module}/ansible/openvpn_do_ssh.pem"
file_permission = "0400"
}
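One more note: the generated inventory above uses ansible_ssh_private_key, but the connection variable Ansible actually documents is ansible_ssh_private_key_file. So an alternative (or complement) to the private_key_file line in ansible.cfg would be to emit that variable from the Terraform template; a minimal sketch, assuming the same module outputs and variables as above:
resource "local_file" "servers_ipv4" {
  content = join("\n", [
    for idx, s in module.openvpn_do_infrastructure_module.servers_ipv4 :
    # ansible_ssh_private_key_file is the documented Ansible connection variable;
    # the .pem path matches the key file written by local_file.ssh_keys above.
    "${var.droplet_names[idx]} ansible_host=${s} ansible_user=root ansible_ssh_private_key_file=openvpn_do_ssh.pem"
  ])
  filename = "${path.module}/ansible/inventory.txt"
}
With that in place, the per-host variable should be enough on its own, even without the ansible.cfg change.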
Hope it can help someone...

Related

How to use passphrase protected private ssh key in terraform?

I am following this tutorial, https://www.digitalocean.com/community/tutorials/how-to-use-ansible-with-terraform-for-configuration-management, to learn Terraform and Ansible.
When I execute terraform apply, it throws an error:
digitalocean_droplet.web[2]: Provisioning with 'remote-exec'...
Error: Failed to parse ssh private key: ssh: this private key is passphrase protected
Error: Error creating droplet: POST https://api.digitalocean.com/v2/droplets: 422 Failed to resolve VPC
on droplets.tf line 1, in resource "digitalocean_droplet" "web":
1: resource "digitalocean_droplet" "web" {
This is the code:
provisioner "remote-exec" {
inline = ["sudo apt update", "sudo apt install python3 -y", "echo DONE!"]
connection {
host = self.ipv4_address
type = "ssh"
user = "root"
private_key = file(var.pvt_key)
}
}
That private SSH key (~/.ssh/id_rsa) on my machine is passphrase protected. How do I use it?
You can add the desired SSH key to the ssh-agent with ssh-add ~/.ssh/id_rsa and then set the agent field in the connection block instead:
connection {
  host  = self.ipv4_address
  type  = "ssh"
  user  = "root"
  agent = true
}
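Note that this needs an ssh-agent running in the same session (eval "$(ssh-agent -s)" before the ssh-add), and the private_key argument has to stay out of the connection block, since Terraform would otherwise still try to parse the encrypted file. If you would rather avoid the agent entirely, another option is to let Terraform generate a dedicated, unencrypted deploy key and register it with DigitalOcean; a rough sketch, with illustrative resource names:
# Sketch only: generate a throwaway deploy key instead of reusing ~/.ssh/id_rsa.
resource "tls_private_key" "deploy" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "digitalocean_ssh_key" "deploy" {
  name       = "terraform-deploy"
  public_key = tls_private_key.deploy.public_key_openssh
}

# In the droplet:          ssh_keys    = [digitalocean_ssh_key.deploy.fingerprint]
# In the connection block: private_key = tls_private_key.deploy.private_key_pem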

proxycommand doesn't seem to work with ansible and my environment

I've tried many combinations to get this to work but cannot for some reason. I am not using keys in our environment so passwords will have to do.
I've tried proxyjump and sshuttle as well.
It's strange, as the ping module works, but when trying another module or a playbook it doesn't.
Rough set up is:
laptop running ubuntu with ansible installed
[laptop] ---> [productionjumphost] ---> [production_iosxr_router]
ansible.cfg:
[ssh_connection]
pipelining = True
ssh_args = -o ControlMaster=auto -o ControlPersist=30m -o ControlPath=/tmp/ansible-%r@%h:%p -F ssh.config
~/.ssh/config and ssh.cfg (both configured with the same content):
Host modeljumphost
    HostName modeljumphost.fqdn.com.au
    User user
    Port 22

Host productionjumphost
    HostName productionjumphost.fqdn.com.au
    User user
    Port 22

Host model_iosxr_router
    HostName model_iosxr_router
    User user
    ProxyCommand ssh -W %h:22 modeljumphost

Host production_iosxr_router
    HostName production_iosxr_router
    User user
    ProxyCommand ssh -W %h:22 productionjumphost
inventory:
[local]
192.168.xxx.xxx
[router]
production_iosxr_router ansible_connection=network_cli ansible_user=user ansible_ssh_pass=password
[router:vars]
ansible_network_os=iosxr
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q user@productionjumphost.fqdn.com.au"'
ansible_user=user
ansible_ssh_pass=password
playbook.yml:
---
- name: Network Getting Started First Playbook
  hosts: router
  gather_facts: no
  connection: network_cli

  tasks:
    - name: show version
      iosxr_command:
        commands: show version
I can run an ad-hoc ansible command and a successful ping is returned:
result: ansible production_iosxr_router -i inventory -m ping -vvvvv
production_iosxr_router | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "invocation": {
        "module_args": {
            "data": "pong"
        }
    },
    "ping": "pong"
}
running playbook: ansible-playbook -i inventory playbook.yml -vvvvv
production_iosxr_router | FAILED! => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "msg": "[Errno -2] Name or service not known"
}

Remote-exec provisioner on gcp not connecting with host

I'm trying to use the remote-exec provisioner for a use case related to my project on GCP, using Terraform version 0.12, based on the format specified in the Terraform docs. I get a known-hosts key mismatch error after the provisioner times out.
resource "google_compute_instance" "secondvm" {
name = "secondvm"
machine_type = "n1-standard-1"
zone = "us-central1-a"
boot_disk {
initialize_params {
image = "centos-7-v20190905"
}
}
network_interface {
network = "default"
access_config {
nat_ip = google_compute_address.second.address
network_tier = "PREMIUM"
}
}
#metadata = {
#ssh-keys = "root:${file("~/.ssh/id_rsa.pub")}"
#}
metadata_startup_script = "cd /; touch makefile.txt; sudo echo \"string xyz bgv\" >>./makefile.txt"
provisioner "remote-exec" {
inline = [
"sudo sed -i 's/xyz/google_compute_address.first.address/gI' /makefile.txt"
]
connection {
type = "ssh"
#port = 22
host = self.network_interface[0].access_config[0].nat_ip
user = "root"
timeout = "120s"
#agent = false
private_key = file("~/.ssh/id_rsa")
#host_key = file("~/.ssh/google_compute_engine.pub")
host_key = file("~/.ssh/id_rsa.pub")
}
}
depends_on = [google_compute_address.second]
}
I'm not sure what exactly I'm doing wrong with the keys here, but the error I get is:
google_compute_instance.secondvm: Still creating... [2m10s elapsed]
google_compute_instance.secondvm (remote-exec): Connecting to remote host via SSH...
google_compute_instance.secondvm (remote-exec): Host: 104.155.186.128
google_compute_instance.secondvm (remote-exec): User: root
google_compute_instance.secondvm (remote-exec): Password: false
google_compute_instance.secondvm (remote-exec): Private key: true
google_compute_instance.secondvm (remote-exec): Certificate: false
google_compute_instance.secondvm (remote-exec): SSH Agent: false
google_compute_instance.secondvm (remote-exec): Checking Host Key: true
google_compute_instance.secondvm: Still creating... [2m20s elapsed]
Error: timeout - last error: SSH authentication failed (root@104.155.186.128:22): ssh: handshake failed: knownhosts: key mismatch

Terraform remote-exec on windows with ssh

I have set up a Windows server and installed SSH using Chocolatey. If I run things manually I have no problems connecting and running my commands, but when I try to use Terraform to run my commands it connects successfully and then doesn't run any commands.
I started by using WinRM, and with that I could run commands, but due to a problem with creating a Service Fabric cluster over WinRM I decided to try SSH instead; when running things manually it worked and the cluster came up, so that seems to be the way forward.
I have set up a Linux VM and got SSH working by using the private key, so I tried to use the same config on the Windows server as I did with the Linux VM, but it still asked me for my password.
What could be the reason that I can run commands over SSH manually, while with Terraform it only connects and no commands are run? I am running this on OpenStack with Windows 2016.
null_resource.sf_cluster_install (remote-exec): Connecting to remote host via SSH...
null_resource.sf_cluster_install (remote-exec): Host: 1.1.1.1
null_resource.sf_cluster_install (remote-exec): User: Administrator
null_resource.sf_cluster_install (remote-exec): Password: true
null_resource.sf_cluster_install (remote-exec): Private key: false
null_resource.sf_cluster_install (remote-exec): SSH Agent: false
null_resource.sf_cluster_install (remote-exec): Checking Host Key: false
null_resource.sf_cluster_install (remote-exec): Connected!
null_resource.sf_cluster_install: Creation complete after 4s (ID: 5017581117349235118)
Here is the script I'm using to run the commands:
resource "null_resource" "sf_cluster_install" {
# count = "${local.sf_count}"
depends_on = ["null_resource.copy_sf_package"]
# Changes to any instance of the cluster requires re-provisioning
triggers = {
cluster_instance_ids = "${openstack_compute_instance_v2.sf_servers.0.id}"
}
connection = {
type = "ssh"
host = "${openstack_networking_floatingip_v2.sf_floatIP.0.address}"
user = "Administrator"
# private_key = "${file("~/.ssh/id_rsa")}"
password = "${var.admin_pass}"
}
provisioner "remote-exec" {
inline = [
"echo hello",
"powershell.exe Write-Host hello",
"powershell.exe New-Item C:/tmp/hello.txt -type file"
]
}
}
Put the connection block inside the provisioner block:
provisioner "remote-exec" {
connection = {
type = "ssh"
...
}
inline = [
"echo hello",
"powershell.exe Write-Host hello",
"powershell.exe New-Item C:/tmp/hello.txt -type file"
]
}
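For completeness, applied to the resource from the question it would look roughly like this (same values as above, with the connection written as a nested block, which is also the form newer Terraform versions require):
resource "null_resource" "sf_cluster_install" {
  depends_on = ["null_resource.copy_sf_package"]

  triggers = {
    cluster_instance_ids = "${openstack_compute_instance_v2.sf_servers.0.id}"
  }

  provisioner "remote-exec" {
    # Connection details now live inside the provisioner block.
    connection {
      type     = "ssh"
      host     = "${openstack_networking_floatingip_v2.sf_floatIP.0.address}"
      user     = "Administrator"
      password = "${var.admin_pass}"
    }

    inline = [
      "echo hello",
      "powershell.exe Write-Host hello",
      "powershell.exe New-Item C:/tmp/hello.txt -type file"
    ]
  }
}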

ssh connection refused when deploying meteor app from nitrous.io to Linode server using meteor up

See https://github.com/arunoda/meteor-up/issues/171
I am trying to deploy my Meteor app from my Nitrous box to a remote server on Linode.
I followed the instructions in meteor up and got:
Invalid mup.json file: Server username does not exit
mup.json
// Server authentication info
"servers": [
  {
    "host": "123.456.78.90",
    // "username": "root",
    // or pem file (ssh based authentication)
    "pem": "~/.ssh/id_rsa",
    "sshOptions": { "Port": 1024 }
  }
]
So I uncommented the "username": "root" line in mup.json, ran mup logs -n 300, and got the following error:
[123.456.78.90] ssh: connect to host 123.456.78.90 port 1024: Connection refused
I suspect I may have done something wrong in setting up the SSH key. I can access my remote server without a password after setting up my SSH key in ~/.ssh/authorized_keys.
The content of the authorized_keys looks like this:
ssh-rsa XXXXXXXXXX..XXXX== root@apne1.nitrousbox.com
Do you guys have any ideas of what went wrong?
Problem solved by uncommenting the username and changing the port to 22:
// Server authentication info
"servers": [
  {
    "host": "123.456.78.90",
    "username": "root",
    // or pem file (ssh based authentication)
    "pem": "~/.ssh/id_rsa",
    "sshOptions": { "Port": 22 }
  }
]