Does terraform support ssh password protected key? - ssh-keys

I'm trying to use this config in here:
connection {
  type        = "ssh"
  user        = "root"
  agent       = true
  private_key = "${file("~/.ssh/id_rsa")}"
}
I get this error:
password protected keys are not supported. Please decrypt the key prior to use.
I've also tried removing the private_key parameter so that it would read keys from the ssh-agent instead, but that doesn't work either.
Terraform version is 0.9.2

Okay, solved. The problem was that no ssh_keys fingerprints were specified, so the VM was created without any SSH key assigned. The error itself is very misleading, though.
So just add this:
resource "digitalocean_droplet" "mydroplet" {
  ssh_keys = [
    "<fingerprint can be found in digital ocean ssh keys tab>"
  ]
}

Related

Junos not accepting key authentication

I have a problem with SSH RSA key authentication. I'm using an SRX100H2 running JUNOS 12.1X46-D10.2. I generated a private/public key pair without a passphrase on an Ubuntu host, copied the public key to /var/tmp on the SRX100H2 using scp, and committed the following changes:
user salvador {
    uid 2001;
    class super-user;
    authentication {
        ssh-rsa "ssh-rsa AAAA..."
    }
}
I loaded the key with the load-key-file command. The problem is that I can't get authenticated: it asks for the password every time, although the key has no passphrase. I'm trying to use this account to run a script with fail2ban. To connect, I'm using:
ssh -i .ssh/name_of_the_key -l salvador x.x.x.x
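(As a sanity check for a setup like this, one can confirm that the private key passed with -i actually matches the public key loaded on the SRX by comparing their fingerprints; a throwaway key pair stands in for the real one in this sketch.)

```shell
# Generate a throwaway key pair with no passphrase (stand-in for the real one).
ssh-keygen -t rsa -b 2048 -N "" -f /tmp/srx_demo -q

# Fingerprint derived from the private key file...
ssh-keygen -lf /tmp/srx_demo

# ...and from the public key you copied to the device. They must match,
# or key authentication will silently fall back to password prompts.
ssh-keygen -lf /tmp/srx_demo.pub
```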
I even went as far as modifying /var/etc/sshd_config on the SRX and adding LogLevel DEBUG3, but for some reason it doesn't log as much information as a regular Linux sshd daemon.
Is there a bug for this firmware version JUNOS 12.1X46-D10.2, or am I doing something wrong?
Thank you for your help.
I found a workaround. There is a python library named Junos PyEZ:
https://github.com/Juniper/py-junos-eznc
It can connect directly to a Junos device using a username and password, without the need for an SSH key, and it can perform various configuration changes on the device. I managed to create a script that adds an address to the address book and adds that address to an address set applied to a security policy. This way the attacking host is banned and cannot access the exposed resources. The script is run by fail2ban each time it is needed.

rundeck SSH Authentication failure

I run Rundeck v4.1.2, using docker-compose.
I have created a test key pair. I have entered the private key into key storage under the path keys/test using the GUI, and configured the target node to require it for SSH access. I have added the public key under /home/rundeck/.ssh/authorized_keys on the target node.
The resources.xml file looks like this:
server18:
  nodename: server18
  hostname: server18.rc-group.local
  osVersion: 18.04
  osFamily: unix
  osArch: amd64
  description: target-test
  osName: Ubuntu
  username: rundeck
  ssh-authentication: privateKey
  ssh-privateKey-storage-path: keys/test
When I try to connect using command line SSH and the same private key, it works fine. So the key is fine, and the target node config is fine.
When, in the GUI, I try to run the "hostname" command on the same target node, I get:
Failed: AuthenticationFailure: Authentication failure connecting to node: "server18". Make sure your resource definitions and credentials are up to date.
Can someone spot what I'm missing?
Use the ssh-key-storage-path attribute instead of ssh-privateKey-storage-path in your node definition; you can see the valid attributes here.
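Applied to the node definition from the question, the corrected entry would look like this (a sketch based on the attributes shown above; only the last attribute name changes):

```yaml
server18:
  nodename: server18
  hostname: server18.rc-group.local
  osVersion: 18.04
  osFamily: unix
  osArch: amd64
  description: target-test
  osName: Ubuntu
  username: rundeck
  ssh-authentication: privateKey
  # Rundeck's SSH plugin reads the key path from this attribute name:
  ssh-key-storage-path: keys/test
```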

Copying Your Public Key Using ssh-copy-id

I am trying to configure SSH key-based authentication, and after creating a key I want to copy the SSH public key to my server. When I give the following command in Git Bash: ssh-copy-id username@remote_host, I am asked for a password.
Should remote_host be the floating IP of the VM I am trying to connect to?
Which password should I type in?
It would be really helpful if you could answer my questions.
On the first SSH operation (here an ssh-copy-id), you need the password of the remote account, in order for the command to add your public key to that remote user account's ~/.ssh/authorized_keys.
Only then will subsequent SSH commands work without asking for a password (only for a passphrase, if your private key is passphrase-protected and you have not added it to an ssh-agent, which caches said passphrase).
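What ssh-copy-id does under the hood is essentially append your public key to the remote account's authorized_keys file. A minimal local sketch of that step, using a throwaway key and an illustrative directory standing in for the remote home:

```shell
# Throwaway key pair, no passphrase (stand-in for your real key).
ssh-keygen -t ed25519 -N "" -f /tmp/demo_ed25519 -q

# Simulate the remote side: create ~/.ssh and append the public key,
# then apply the permissions sshd requires to honor the file.
mkdir -p /tmp/fake_remote/.ssh
cat /tmp/demo_ed25519.pub >> /tmp/fake_remote/.ssh/authorized_keys
chmod 700 /tmp/fake_remote/.ssh
chmod 600 /tmp/fake_remote/.ssh/authorized_keys
```

Once the equivalent line exists in the real server's ~/.ssh/authorized_keys, password prompts stop for that key.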

Trying to create eks cluster using eksctl with ssh-access

While creating an EKS cluster using eksctl, it throws an error like "error decoding SSH public key".
The permissions of the .pem file are 400.
The command I am executing:
eksctl create cluster --name=thirdekscluster --ssh-access --ssh-public-key=mysshkey.pem --nodegroup-name=standard-workers --node-type=t3.medium --nodes=3 --nodes-min=1 --nodes-max=4 --node-ami=auto
Error:
[ℹ] using region ap-south-1
[ℹ] setting availability zones to [ap-south-1a ap-south-1c ap-south-1b]
[ℹ] subnets for ap-south-1a - public:xxxxx/19 private:xxxx/19
[ℹ] subnets for ap-south-1c - public:xxxxx/19 private:xxxx/19
[ℹ] subnets for ap-south-1b - public:xxxxx/19 private:xxxx/19
[ℹ] nodegroup "standard-workers" will use "ami-01b6a163133c31994" [AmazonLinux2/1.12]
[✖] computing fingerprint for key "mysshkey.pem":
error decoding SSH public key:
"-----BEGIN RSA PRIVATE KEY-----
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
-----END RSA PRIVATE KEY-----"
err: illegal base64 data at input byte 0
I had the same problem; in my case I was trying to use the private key instead of the public one (the key was created directly in the AWS EC2 console).
Solution:
ssh-keygen -y -f <private-key>.pem > <public-key>.pem (illustrative names)
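Spelled out end to end, the recovery step looks like this (a sketch; a freshly generated key stands in for an EC2 .pem, and file names are illustrative):

```shell
# Stand-in for a .pem downloaded from the EC2 console: an RSA private key
# in PEM format with no passphrase.
ssh-keygen -t rsa -b 2048 -N "" -m PEM -f /tmp/mykey -q
mv /tmp/mykey /tmp/mykey.pem

# Derive the OpenSSH public key from the private key; this is what
# eksctl's --ssh-public-key flag expects, not the private .pem itself.
ssh-keygen -y -f /tmp/mykey.pem > /tmp/mykey.pub

head -c 7 /tmp/mykey.pub   # → ssh-rsa
```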
Thanks for your response, but I sorted it out myself.
created cluster using - eksctl create cluster --name=thirdekscluster --ssh-access=true --ssh-public-key=sreeeks --nodegroup-name=standard-workers --node-type=t3.medium --nodes=3 --nodes-min=1 --nodes-max=4 --node-ami=auto
The SSH part of the command should be --ssh-access=true.
I used Bitvise Client Key Management to export in OpenSSH format. After that, eksctl worked!
For me it worked once I removed the BOM by copying the public key into a different text file.
eksctl create cluster --ssh-access --ssh-public-key=~/.ssh/id_rsa.pub --nodegroup-name=standard-workers --node-type=t3.medium --nodes=2 --nodes-min=1 --nodes-max=2
This should definitely work.

Spring Cloud Config - Cannot clone or checkout repository: ssh://git@github.com/<user>/repo.git

When I try the URL with https it works, but I don't want to have the username and password in it. I am able to perform a git clone with the above URL and it works. But when I try it in the code and hit the localhost:8888/default endpoint, I get the error:
{
  "error": "Not Found",
  "message": "Cannot clone or checkout repository: ssh://git@github.com/<user>/config-repo.git",
  "path": "/licensingservice/default",
  "status": 404,
  "timestamp": "2018-04-30T23:32:54.726+0000"
}
Here is my application.yml entry
server:
  port: 8888
spring:
  cloud:
    config:
      server:
        git:
          uri: ssh://git@github.com/<user>/config-repo.git
          searchPaths: licensingservice
I am using Spring Cloud Config Finchley. Not sure what I am missing. Please suggest.
A few things to inspect:
Do you have github.com in your known_hosts? If not, it will prompt you to enter a password even if you have key pairs, which might cause the error.
Do you have SSH keys under your /home/{you}/.ssh/ folder?
When you generated your keys, did you use a passphrase? If so, you need to include the passphrase in your YAML file.
If all of the above are okay, then download spring-cloud-config-server and debug org.springframework.cloud.config.server.environment.JGitEnvironmentRepository.
Good luck.
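For the passphrase and key points above, the key material can also be set directly in the config server's YAML instead of relying on ~/.ssh. A sketch, with property names following Spring Cloud Config's documented git SSH settings and all values as placeholders:

```yaml
spring:
  cloud:
    config:
      server:
        git:
          uri: ssh://git@github.com/<user>/config-repo.git
          # Tell the server to use these properties instead of ~/.ssh files.
          ignoreLocalSshSettings: true
          privateKey: |
            -----BEGIN RSA PRIVATE KEY-----
            ...
            -----END RSA PRIVATE KEY-----
          # Only needed if the private key is passphrase-protected.
          passphrase: my-passphrase
```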
You have to add your machine's SSH key to the repo.
The SSH key should be generated with the following command:
ssh-keygen -m PEM -t rsa -b 4096 -C "your_email@example.com"
For more info, you can check this link
https://skryvets.com/blog/2019/05/27/solved-issue-spring-cloud-config-server-private-ssh-key/
I also faced a similar problem with the Spring Cloud Config server. You need to add an additional property, spring.cloud.config.server.git.skip-ssl-validation=true, in the application.properties file.