I am following this tutorial, https://www.digitalocean.com/community/tutorials/how-to-use-ansible-with-terraform-for-configuration-management, to learn Terraform and Ansible.
When I execute terraform apply, it throws an error:
digitalocean_droplet.web[2]: Provisioning with 'remote-exec'...
Error: Failed to parse ssh private key: ssh: this private key is passphrase protected
Error: Error creating droplet: POST https://api.digitalocean.com/v2/droplets: 422 Failed to resolve VPC
on droplets.tf line 1, in resource "digitalocean_droplet" "web":
1: resource "digitalocean_droplet" "web" {
This is the code:
provisioner "remote-exec" {
inline = ["sudo apt update", "sudo apt install python3 -y", "echo DONE!"]
connection {
host = self.ipv4_address
type = "ssh"
user = "root"
private_key = file(var.pvt_key)
}
}
That private SSH key (~/.ssh/id_rsa) on my machine is passphrase-protected. How do I use it?
You can add the desired SSH key to the ssh-agent with ssh-add ~/.ssh/id_rsa and then set the agent field in the connection block:
connection {
  host  = self.ipv4_address
  type  = "ssh"
  user  = "root"
  agent = true
}
Using Terraform, I have created 3 droplets on DigitalOcean. While doing that, Terraform also writes an SSH key and an inventory.txt file into a folder. Here is how it looks in the Terraform code:
resource "local_file" "servers_ipv4" {
content = join("\n", [
for idx, s in module.openvpn_do_infrastructure_module.servers_ipv4:
<<EOT
${var.droplet_names[idx]} ansible_host=${s} ansible_user=root ansible_ssh_private_key=openvpn_do_ssh.key
EOT
])
filename = "${path.module}/ansible/inventory.txt"
}
resource "local_file" "ssh_keys" {
content = module.openvpn_do_infrastructure_module.ssh_keys
filename = "${path.module}/ansible/openvpn_do_ssh.key"
}
Then I have an ansible folder. After the script has run and the droplets are created, this folder contains 3 files. The first file is just ansible.cfg:
[defaults]
host_key_checking = false
inventory = ./inventory.txt
The other 2 are created by Terraform: the SSH key, openvpn_do_ssh.key, and inventory.txt:
certificate-authority-server ansible_host=123.123.123.121 ansible_user=root ansible_ssh_private_key=openvpn_do_ssh.key
openvpn-server ansible_host=123.123.123.122 ansible_user=root ansible_ssh_private_key=openvpn_do_ssh.key
nextcloud-server ansible_host=123.123.123.123 ansible_user=root ansible_ssh_private_key=openvpn_do_ssh.key
And here is the problem. When I do ansible all -m ping, I get errors:
certificate-authority-server | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: root#123.123.123.121: Permission denied (publickey).",
"unreachable": true
}
nextcloud-server | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: root#123.123.123.122: Permission denied (publickey).",
"unreachable": true
}
openvpn-server | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: root#123.123.123.123: Permission denied (publickey).",
"unreachable": true
}
Also, I can connect to those droplets directly with SSH and everything is fine. Even when I change the permissions on the .key file, I still get the same error. I tried to get more logs with the -vvv flag (ansible all -m ping -vvv), and here is the most interesting thing I found:
ESTABLISH SSH CONNECTION FOR USER: root
...
<123.123.123.121> (255, b'', b"Warning: Permanently added '123.123.123.121' (ED25519) to the list of known hosts.\r\nroot#123.123.123.121: Permission denied (publickey).\r\n")
<123.123.123.121> (255, b'', b'root#123.123.123.121: Permission denied (publickey).\r\n')
I have solved this problem. This is what helped me:
First of all, I changed the extension of the SSH key file from .key to .pem.
To ansible.cfg I added the following line:
[defaults]
host_key_checking = false
inventory = ./inventory.txt
private_key_file = ./openvpn_do_ssh.pem
The last thing I did was add a read-only file_permission for the SSH key:
resource "local_file" "ssh_keys" {
content = module.openvpn_do_infrastructure_module.ssh_keys
filename = "${path.module}/ansible/openvpn_do_ssh.pem"
content = module.openvpn_do_infrastructure_module.ssh_keys
filename = "${path.module}/ansible/openvpn_do_ssh.pem"
file_permission = "0400"
}
Hope it can help someone...
I'm trying to use the remote-exec provisioner for a use case in my project on GCP, using Terraform 0.12 and the format specified in the Terraform docs. I get a known-hosts key mismatch error after the provisioner times out.
resource "google_compute_instance" "secondvm" {
name = "secondvm"
machine_type = "n1-standard-1"
zone = "us-central1-a"
boot_disk {
initialize_params {
image = "centos-7-v20190905"
}
}
network_interface {
network = "default"
access_config {
nat_ip = google_compute_address.second.address
network_tier = "PREMIUM"
}
}
#metadata = {
#ssh-keys = "root:${file("~/.ssh/id_rsa.pub")}"
#}
metadata_startup_script = "cd /; touch makefile.txt; sudo echo \"string xyz bgv\" >>./makefile.txt"
provisioner "remote-exec" {
inline = [
"sudo sed -i 's/xyz/google_compute_address.first.address/gI' /makefile.txt"
]
connection {
type = "ssh"
#port = 22
host = self.network_interface[0].access_config[0].nat_ip
user = "root"
timeout = "120s"
#agent = false
private_key = file("~/.ssh/id_rsa")
#host_key = file("~/.ssh/google_compute_engine.pub")
host_key = file("~/.ssh/id_rsa.pub")
}
}
depends_on = [google_compute_address.second]
}
I'm not sure what exactly I'm doing wrong with the keys here, but the error I get is:
google_compute_instance.secondvm: Still creating... [2m10s elapsed]
google_compute_instance.secondvm (remote-exec): Connecting to remote host via SSH...
google_compute_instance.secondvm (remote-exec): Host: 104.155.186.128
google_compute_instance.secondvm (remote-exec): User: root
google_compute_instance.secondvm (remote-exec): Password: false
google_compute_instance.secondvm (remote-exec): Private key: true
google_compute_instance.secondvm (remote-exec): Certificate: false
google_compute_instance.secondvm (remote-exec): SSH Agent: false
google_compute_instance.secondvm (remote-exec): Checking Host Key: true
google_compute_instance.secondvm: Still creating... [2m20s elapsed]
Error: timeout - last error: SSH authentication failed (root@104.155.186.128:22): ssh: handshake failed: knownhosts: key mismatch
I have set up a Windows server and installed SSH using Chocolatey. If I run things manually I have no problems connecting and running my commands. When I try to use Terraform to run my commands, it connects successfully but doesn't run any commands.
I started by using WinRM, and then I could run commands, but due to a problem with creating a Service Fabric cluster over WinRM I decided to try SSH instead. When running things manually it worked and the cluster came up, so that seems to be the way forward.
I have set up a Linux VM and got SSH working by using the private key. So I tried to use the same config as I did with the Linux VM on the Windows server, but it still asked me for my password.
What could be the reason that I can run commands over SSH manually, but with Terraform it only connects and no commands are run? I am running this on OpenStack with Windows Server 2016.
null_resource.sf_cluster_install (remote-exec): Connecting to remote host via SSH...
null_resource.sf_cluster_install (remote-exec): Host: 1.1.1.1
null_resource.sf_cluster_install (remote-exec): User: Administrator
null_resource.sf_cluster_install (remote-exec): Password: true
null_resource.sf_cluster_install (remote-exec): Private key: false
null_resource.sf_cluster_install (remote-exec): SSH Agent: false
null_resource.sf_cluster_install (remote-exec): Checking Host Key: false
null_resource.sf_cluster_install (remote-exec): Connected!
null_resource.sf_cluster_install: Creation complete after 4s (ID: 5017581117349235118)
Here is the script I'm using to run the commands:
resource "null_resource" "sf_cluster_install" {
# count = "${local.sf_count}"
depends_on = ["null_resource.copy_sf_package"]
# Changes to any instance of the cluster requires re-provisioning
triggers = {
cluster_instance_ids = "${openstack_compute_instance_v2.sf_servers.0.id}"
}
connection = {
type = "ssh"
host = "${openstack_networking_floatingip_v2.sf_floatIP.0.address}"
user = "Administrator"
# private_key = "${file("~/.ssh/id_rsa")}"
password = "${var.admin_pass}"
}
provisioner "remote-exec" {
inline = [
"echo hello",
"powershell.exe Write-Host hello",
"powershell.exe New-Item C:/tmp/hello.txt -type file"
]
}
}
Put the connection block inside the provisioner block:
provisioner "remote-exec" {
connection = {
type = "ssh"
...
}
inline = [
"echo hello",
"powershell.exe Write-Host hello",
"powershell.exe New-Item C:/tmp/hello.txt -type file"
]
}
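For reference, here is a minimal sketch of the whole provisioner with the connection details from the question moved inside it (same host, user, and password expressions as above; the connection is written as a block rather than connection = {...}, which is what newer Terraform versions expect):
provisioner "remote-exec" {
  connection {
    type     = "ssh"
    host     = "${openstack_networking_floatingip_v2.sf_floatIP.0.address}"
    user     = "Administrator"
    password = "${var.admin_pass}"
  }

  inline = [
    "echo hello",
    "powershell.exe Write-Host hello",
    "powershell.exe New-Item C:/tmp/hello.txt -type file"
  ]
}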
See https://github.com/arunoda/meteor-up/issues/171
I am trying to deploy my Meteor app from my Nitrous box to a remote server on Linode.
I followed the instructions in meteor up and got:
Invalid mup.json file: Server username does not exit
mup.json
// Server authentication info
"servers": [
  {
    "host": "123.456.78.90",
    // "username": "root",
    // or pem file (ssh based authentication)
    "pem": "~/.ssh/id_rsa",
    "sshOptions": { "Port": 1024 }
  }
]
So I uncommented the "username": "root" line in mup.json, then ran mup logs -n 300 and got the following error:
[123.456.78.90] ssh: connect to host 123.456.78.90 port 1024: Connection refused
I suspect I may have done something wrong when setting up the SSH key. I can access my remote server without a password after adding my SSH key to ~/.ssh/authorized_keys.
The content of the authorized_keys looks like this:
ssh-rsa XXXXXXXXXX..XXXX== root@apne1.nitrousbox.com
Do you guys have any ideas of what went wrong?
Problem solved by uncommenting the username and changing the port to 22:
// Server authentication info
"servers": [
  {
    "host": "123.456.78.90",
    "username": "root",
    // or pem file (ssh based authentication)
    "pem": "~/.ssh/id_rsa",
    "sshOptions": { "Port": 22 }
  }
]
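If you want to confirm which port sshd is actually listening on before editing mup.json, a quick manual check (using the same host and key as above) is:
# should succeed when sshd listens on the default port 22
ssh -i ~/.ssh/id_rsa -p 22 root@123.456.78.90 'echo ok'
# this is what mup was trying; "Connection refused" means nothing is listening on 1024
ssh -i ~/.ssh/id_rsa -p 1024 root@123.456.78.90 'echo ok'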
I am fairly new to Vagrant and Docker both.
What I am trying to do here is get a container provisioned via Docker in Vagrant and install a small webapp using the shell provisioner.
Here is my Vagrantfile:
Vagrant.configure(2) do |config|
  # config.vm.provision :shell, path: "bootstrap.sh"
  config.vm.provision :shell, inline: 'echo Hi there !!!'

  config.vm.provider :docker do |d|
    d.name = "appEnvironment"
    d.image = "phusion/baseimage"
    d.remains_running = true
    d.has_ssh = true
    d.cmd = ["/sbin/my_init", "--enable-insecure-key"]
  end
end
The problem I am facing is that after the container is created it keeps printing the following and eventually just stops.
I can see a running Docker container when I type docker ps, but it hasn't run the provisioning part. I am assuming that is because the SSH connection wasn't successful.
==> default: Creating the container...
default: Name: appEnvironment
default: Image: phusion/baseimage
default: Cmd: /sbin/my_init --enable-insecure-key
default: Volume: /home/devops/vagrantBoxForDemo:/vagrant
default: Port: 127.0.0.1:2222:22
default:
default: Container created: 56a87b7cd10c22fe
==> default: Starting container...
==> default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 172.17.0.50:22
default: SSH username: vagrant
default: SSH auth method: private key
default: Warning: Connection refused. Retrying...
default: Warning: Connection refused. Retrying...
default: Warning: Connection refused. Retrying...
Can someone let me know where I might be going wrong? I tried changing the image as well, but without success.
First download the insecure key provided by phusion from:
https://github.com/phusion/baseimage-docker/blob/master/image/insecure_key
* Remember the insecure key should be used only for development purposes.
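For example, assuming the raw-file URL that corresponds to the blob link above, you could fetch the key and give it the permissions SSH expects like this:
# download the well-known insecure key (development only!)
curl -fsSL -o insecure_key https://raw.githubusercontent.com/phusion/baseimage-docker/master/image/insecure_key
# ssh refuses to use a private key that is readable by others
chmod 600 insecure_key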
Now, you need to enable ssh by adding the following into your Dockerfile:
FROM phusion/baseimage
RUN rm -f /etc/service/sshd/down
RUN /etc/my_init.d/00_regen_ssh_host_keys.sh
RUN /usr/sbin/enable_insecure_key
Enable ssh and specify the key file in your Vagrantfile:
Vagrant.configure("2") do |config|
config.vm.define "app" do |app|
app.vm.provider "docker" do |d|
d.build_dir = "."
d.cmd = ["/sbin/my_init", "--enable-insecure-key"]
d.has_ssh = true
end
end
config.ssh.username = "root"
config.ssh.private_key_path = "path/to/your/insecure_key"
end
Bring up your environment:
vagrant up
Now you should be able to access your container over ssh:
vagrant ssh app
phusion/baseimage does not have the insecure private key enabled by default. You have to create your own base image FROM phusion/baseimage with the following:
RUN /usr/sbin/enable_insecure_key
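In other words, the smallest Dockerfile for such a base image would be something like the following (the my-baseimage tag is just an example name):
FROM phusion/baseimage
RUN /usr/sbin/enable_insecure_key
Build it with docker build -t my-baseimage . and point d.image (or d.build_dir) in your Vagrantfile at the result.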