How to run an ansible-playbook with a passphrase-protected-ssh-private-key? - ssh

I created an autoscaling group for Amazon EC2 and added my public key when I built the AMI with Packer; I can run ansible-playbook and ssh to the hosts.
But there is a problem when I run the playbook like this:
ansible-playbook load.yml
I am asked to enter my passphrase:
Enter passphrase for key '/Users/XXX/.ssh/id_rsa':
Enter passphrase for key '/Users/XXX/.ssh/id_rsa':
Enter passphrase for key '/Users/XXX/.ssh/id_rsa':
The problem is it doesn't accept my passphrase (I am sure I am typing it correctly).
I found that I can send a password with the --ask-pass flag, so I changed my command to ansible-playbook load.yml --ask-pass. That got me further, but some later task asked for the password again and did not accept it:
[WARNING]: Unable to parse /etc/ansible/hosts as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [localhost] *************************************************************************************************************
TASK [ec2_instance_facts] ****************************************************************************************************
ok: [localhost]
TASK [add_host] **************************************************************************************************************
changed: [localhost] => (item=xx.xxx.xx.xxx)
changed: [localhost] => (item=yy.yyy.yyy.yyy)
PLAY [instances] *************************************************************************************************************
TASK [Copy gatling.conf] *****************************************************************************************************
ok: [xx.xxx.xx.xxx]
ok: [yy.yyy.yyy.yyy]
Enter passphrase for key '/Users/ccc/.ssh/id_rsa':
Enter passphrase for key '/Users/ccc/.ssh/id_rsa':
Enter passphrase for key '/Users/ccc/.ssh/id_rsa':
Enter passphrase for key '/Users/ccc/.ssh/id_rsa':
Enter passphrase for key '/Users/ccc/.ssh/id_rsa':
If I don't use the --ask-pass flag, even the [Copy gatling.conf] task doesn't complete and complains that it could not access the hosts. With the flag, this part passes, but the next task asks for the passphrase again.
How should I solve this issue? What am I doing wrong here?

Ansible has no option for storing the passphrase of a passphrase-protected private key, so you need to add the key to the ssh-agent instead.
Start the ssh-agent in the background:
$ eval "$(ssh-agent -s)"
Add your SSH private key to the ssh-agent:
$ ssh-add ~/.ssh/id_rsa
Now try running ansible-playbook and ssh to the hosts.

I solved it by running ssh-add once and then using the key as if it were not passphrase-protected.

Building on @javeed-shakeel's answer, I added the following lines to my .bashrc:
command -v ansible > /dev/null &&
alias ansible='ssh-add -l > /dev/null || ssh-add 2> /dev/null && ansible'
command -v ansible-playbook > /dev/null &&
alias ansible-playbook='ssh-add -l > /dev/null || ssh-add 2> /dev/null && ansible-playbook'
This will run ssh-add before ansible(-playbook) only if no key has been added to the ssh-agent yet. This has the advantage that one does not need to run ssh-add by hand, and one is asked for the passphrase only when necessary.
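The same idea can be written as a shell function instead of an alias; this is only a sketch of the approach above, and it assumes bash (aliases are not expanded in non-interactive shells, while a function in .bashrc also works from scripts that source it):

```shell
# Wrap the real ansible-playbook: load a key into the agent first
# if it does not hold one yet, then delegate to the actual binary.
ansible-playbook() {
    ssh-add -l > /dev/null 2>&1 || ssh-add
    command ansible-playbook "$@"   # "command" bypasses this function
}
```

As with the alias, ssh-add only prompts for the passphrase when the agent is empty.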

A passphrase is not the same thing as a password. When ssh asks for a password, you log in as a user whose password is checked by the computer you ssh into. It might be a company-wide password (AD, LDAP, etc.) or a local admin password.
A passphrase, by contrast, is used to encrypt the private key of your ssh key-pair. Typically the key-pair is generated with ssh-keygen, which asks for a passphrase twice when you create the key-pair. That is for security: if there is no passphrase on the private key, anyone with a copy of the file can impersonate you! The passphrase is an added factor of security.
ssh-agent is a program that creates a session for the use of your key-pair.
If you want to automate in GitLab using an encrypted ssh key-pair, this might help:
##
## Install ssh-agent if not already installed.
## (change apt-get to yum if you use an RPM-based image)
##
- 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
##
## Run ssh-agent (on the Ansible control host)
##
- eval $(ssh-agent)
##
## Create the SSH directory and give it the right permissions
##
- mkdir -p ~/.ssh
- chmod 700 ~/.ssh
## Create a shell script that will echo the environment variable SSH_PASSPHRASE
- echo 'echo $SSH_PASSPHRASE' > ~/.ssh/tmp && chmod 700 ~/.ssh/tmp
## If ssh-add needs a passphrase, it will read the passphrase from the current
## terminal if it was run from a terminal. If ssh-add does not have a terminal
## associated with it but DISPLAY and SSH_ASKPASS are set, it will execute the
## program specified by SSH_ASKPASS and open an X11 window to read the
## passphrase. This is particularly useful when calling ssh-add from a
## .xsession or related script. Setting DISPLAY=None drops the use of X11.
## Add the SSH key stored in SSH_PRIVATE_KEY variable to the agent store
## We're using tr to fix line endings which makes ed25519 keys work
## without extra base64 encoding.
## https://gitlab.com/gitlab-examples/ssh-private-key/issues/1#note_48526556
##
- echo "$SSH_PRIVATE_KEY" | tr -d '\r' | DISPLAY=None SSH_ASKPASS=~/.ssh/tmp ssh-add -
##
## Use ssh-keyscan to scan the keys of your private server. Replace example.com
## with your own domain name. You can copy and repeat that command if you have
## more than one server to connect to.
##
- ssh-keyscan example.com >> ~/.ssh/known_hosts
- chmod 644 ~/.ssh/known_hosts
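The SSH_ASKPASS mechanism the snippet relies on can be tried locally. The following is a minimal, self-contained sketch with a throwaway key and a made-up passphrase ("demo-pass"); it assumes the OpenSSH client tools are installed:

```shell
# Local demo of the SSH_ASKPASS trick with a throwaway key.
tmp=$(mktemp -d)
ssh-keygen -q -t ed25519 -N 'demo-pass' -f "$tmp/key"

# Helper that prints the passphrase instead of prompting on a terminal.
printf '#!/bin/sh\necho demo-pass\n' > "$tmp/askpass"
chmod 700 "$tmp/askpass"

eval "$(ssh-agent -s)" > /dev/null
# With no terminal on stdin, ssh-add runs $SSH_ASKPASS to get the passphrase.
DISPLAY=None SSH_ASKPASS="$tmp/askpass" ssh-add "$tmp/key" < /dev/null
loaded=$(ssh-add -l)        # the demo key should now be listed
ssh-agent -k > /dev/null
echo "$loaded"
```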

Related

Ansible SSH authentication with password and private key

For security reasons and compliance, we're required to set up 2FA on our hosts. We implement it by forcing authentication with passwords AND a public key with the AuthenticationMethods setting in sshd_config. The private key is required to have a password as well.
So in order to run playbooks on these hosts, we need to be able to enter the login password and the passphrase of the private key. I've used the -k flag together with the ansible_ssh_private_key_file option in the hosts file (or with the --private-key flag). It asks for the SSH login password but then just hangs and never asks me for the private key passphrase. With the -vvvv flag I see that the key is passed correctly, but the SSH login password isn't passed with the command:
<10.1.2.2> SSH: EXEC sshpass -d10 ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=22022 -o 'IdentityFile="/home/me/.ssh/id_ed25519"' -o 'User="me"' -o ConnectTimeout=10 -o ControlPath=/home/me/.ansible/cp/db574551ae 10.1.2.2 '/bin/sh -c '"'"'echo ~me && sleep 0'"'"''
How can I make Ansible work with both passwords and public keys?
As stated in the Ansible Documentation:
Ansible does not expose a channel to allow communication between the user and the
ssh process to accept a password manually to decrypt an ssh key when using the ssh
connection plugin (which is the default). The use of ssh-agent is highly recommended.
This is why you don't get prompted to type in your private key passphrase. As said in the comments, set up an ssh agent; you will be prompted for the passphrase when adding the key:
$ ssh-agent bash
$ ssh-add ~/.ssh/id_rsa
Then, after the playbook run, clear out the identities so you will be asked for the passphrase again next time:
# Deletes all identities from the agent:
ssh-add -D
# or, instead of adding identities, removes identities (selectively) from the agent:
ssh-add -d <file>
You may pack key addition, playbook execution, and cleanup into one wrapper shell script.
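Such a wrapper could look like the sketch below (the key path and playbook name are placeholders; a throwaway agent is started per run so the identity disappears with it):

```shell
# Hypothetical wrapper: load the key, run the playbook, then kill the
# agent so the passphrase is required again next time.
run_playbook() {
    key="$HOME/.ssh/id_rsa"              # adjust to your key
    eval "$(ssh-agent -s)" > /dev/null   # private agent just for this run
    ssh-add "$key" &&                    # prompts once for the passphrase
        ansible-playbook -k "$@"         # -k still asks the login password
    ssh-agent -k > /dev/null             # agent dies, identity is gone
}
# usage: run_playbook site.yml
```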

How to fix Permission denied (publickey,password) in gitlab Ci/CD?

I have the simple script below in my .gitlab-ci.yml file:
build_deploy_stage:
stage: build
environment: Staging
only:
- master
script:
- mkdir -p ~/.ssh
- echo "$PRIVATE_KEY" >> ~/.ssh/id_dsa
- cat ~/.ssh/id_dsa
- chmod 600 ~/.ssh/id_dsa
- echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
- cat ~/.ssh/config
- scp myfile.js user@example.com:~
But I get this error when the job runs, at the last line (the scp command):
Warning: Permanently added 'example.com' (ECDSA) to the list of known hosts.
Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,password).
I spent the whole day but could not fix it. I verified that $PRIVATE_KEY exists. I generated the key pair while logged into example.com, copying the generated private key into the PRIVATE_KEY variable on GitLab.
How can I fix this problem?
Note that it is a DSA key.
Check the permissions of ~/.ssh (700) and of all the files in it (600).
Your config file, for instance, might have default permissions that are too open.
If you can, activate a debug session in the sshd of the remote server: you will see whether a DSA key is accepted (recent versions of sshd may reject them). RSA would be better.
As seen here, OpenSSH 7.0 and higher no longer accept DSA keys by default.
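Regenerating the pair as RSA is straightforward; the sketch below writes into a temporary directory (use ~/.ssh in real life), and the user/host in the commented ssh-copy-id line are placeholders:

```shell
# Regenerate an RSA pair to replace the rejected DSA key.
# -N "" creates an unencrypted key; drop it to be prompted for a passphrase.
dir=$(mktemp -d)                        # use ~/.ssh in real life
ssh-keygen -q -t rsa -b 4096 -N "" -f "$dir/id_rsa"
# Then install the new public key on the server, e.g.:
# ssh-copy-id -i "$dir/id_rsa.pub" user@example.com
```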
The OP ace confirms in the comments:
I fixed the problem when I regenerated rsa key pairs instead of dsa keys

ssh-add when ssh-agent fails

I am trying to write a script that makes use of {ssh,gpg}-agents effortless (like keychain, but I discovered it too late). This script will be run in a sensitive environment, so I set a timeout in order to remove the keys from the agent after some time.
I managed to write the spawn/reuse part, but now I want ssh-add to be called automatically when the user opens an ssh connection and the agent has no proper key.
Is there any way to make ssh-agent call ssh-add on failure, or something better?
What I am doing (assuming the key has a distinctive name)
I have a script in ~/bin/ (which is in my PATH)
#!/bin/bash
if ! ssh-add -l | grep -q nameOfmyKey
then
ssh-add -t 2h ~/path-to-mykeys/nameOfmyKey.key
fi
ssh myuser@myserver.example.net
ssh-add -l lists all keys currently held by the agent.
The -t parameter ensures that the key is kept for a limited time only.

Google cloud engine, ssh between two centos7 instances fails

I've setup 2 Google Compute Engine instances and I can easily SSH in both of them by using the key created by gcloud compute ssh command. But when I try the following...
myself@try-master ~] ssh-keygen -q -t rsa -N "" -f ~/.ssh/id_rsa
myself@try-master ~] cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
myself@try-master ~] chmod 0600 ~/.ssh/authorized_keys
myself@try-master ~] ssh-copy-id -i ~/.ssh/id_rsa.pub myself@try-slave-1
... it does not work, and ssh-copy-id shows the message below:
Permission denied (publickey,gssapi-keyex,gssapi-with-mic)
If I copy the google_compute_engine private and public keys to try-master, I can use them to log on to both instances, but I find it unsatisfactory to move a private key over the network. I guess this is somewhat related to this topic:
How can this be solved?
[1] https://cloud.google.com/compute/docs/instances#sshbetweeninstances
Using CentOS7 images, and a CentOs7 as local host:
gcloud compute instances create try-master --image centos-7
gcloud compute instances create try-slave-1 --image centos-7
This can be solved by using authentication forwarding during initial SSH keys setup:
Set up authentication forwarding once on the local machine (note the "-A" flag). First you need to run:
eval `ssh-agent -s`
And then
ssh-add ~/.ssh/google_compute_engine
gcloud compute ssh --ssh-flag="-A" try-master
Perform the steps above (from keygen to ssh-copy-id)
myself@try-master ~] ssh-keygen -q -t rsa -N "" -f ~/.ssh/id_rsa
myself@try-master ~] cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
myself@try-master ~] chmod 0600 ~/.ssh/authorized_keys
myself@try-master ~] ssh-copy-id -i ~/.ssh/id_rsa.pub myself@try-slave-1
myself@try-master ~] exit
Login again into try-master without SSH authentication forwarding:
gcloud compute ssh try-master
myself@try-master ~] ssh myself@try-slave-1
myself@try-slave-1 ~]
The initial approach didn't work because GCE instances only allow public-key authentication by default. So ssh-copy-id was unable to authenticate against try-slave to copy the new public key, because no key pair configured on try-master was available on try-slave yet.
Using authentication forwarding, the private key from your local machine is forwarded from your local machine to try-master, and from there to try-slave. The GCE account manager on try-slave fetches the public key from your project metadata, and thus ssh-copy-id is able to copy the new key.

How do I setup passwordless ssh on AWS

How do I set up passwordless ssh between nodes on an AWS cluster?
The following steps to set up passwordless authentication have been tested thoroughly on CentOS and Ubuntu.
Assumptions:
You already have access to your EC2 machine, either using the .pem key or with credentials for a Unix user that has root permissions.
You have already set up RSA keys on your local machine. The private key and public key are available at "~/.ssh/id_rsa" and "~/.ssh/id_rsa.pub" respectively.
Steps:
Log in to your EC2 machine as the root user.
Create a new user
useradd -m <yourname>
sudo su <yourname>
cd
mkdir -p ~/.ssh
touch ~/.ssh/authorized_keys
Append the contents of the file ~/.ssh/id_rsa.pub on your local machine to ~/.ssh/authorized_keys on the EC2 machine.
chmod -R 700 ~/.ssh
chmod 600 ~/.ssh/*
Make sure the machine permits SSH logins. In the file /etc/ssh/sshd_config, make sure the line containing "PasswordAuthentication yes" is uncommented. Restart the sshd service if you make any change to this file:
service sshd restart # On Centos
service ssh restart # On Ubuntu
Your passwordless login should now work. Try the following from your local machine:
ssh -A <yourname>@ec2-xx-xx-xxx-xxx.ap-southeast-1.compute.amazonaws.com
Make yourself a superuser: open /etc/sudoers and make sure the following two lines are uncommented:
## Allows people in group wheel to run all commands
%wheel ALL=(ALL) ALL
## Same thing without a password
%wheel ALL=(ALL) NOPASSWD: ALL
Add yourself to the wheel group:
usermod -aG wheel <yourname>
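The steps above can be sketched as a single root-run function; the user name and public-key string passed in are placeholders for your own values:

```shell
# Sketch of the user-setup steps, to be run as root on the EC2 host.
setup_user() {
    user="$1"; pubkey="$2"
    useradd -m "$user"
    home=$(eval echo "~$user")
    mkdir -p "$home/.ssh"
    echo "$pubkey" >> "$home/.ssh/authorized_keys"
    chmod 700 "$home/.ssh"
    chmod 600 "$home/.ssh/authorized_keys"
    chown -R "$user:$user" "$home/.ssh"
    usermod -aG wheel "$user"   # sudo rights via wheel (see /etc/sudoers above)
}
# usage (as root): setup_user yourname "$(cat /path/to/id_rsa.pub)"
```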
This may help someone:
Copy the .pem file onto the machine, then copy the contents of the .pem file into the ~/.ssh/id_rsa file (and make sure it has 600 permissions). You can use the commands below or your own:
cat my.pem > ~/.ssh/id_rsa
chmod 600 ~/.ssh/id_rsa
Try ssh localhost; it should work, and the same goes for the other machines in the cluster.
Here is how I made passwordless SSH work between two instances:
Create the EC2 instances – they should be in the same subnet and have the same security group
Open ports between them – make sure the instances can communicate with each other. Use the default security group, which has one rule relevant for this case:
Type: All Traffic
Source: Custom – the id of the security group
Log in to the instance you want to connect from
Run:
ssh-keygen -t rsa -N "" -f /home/ubuntu/.ssh/id_rsa
to generate a new rsa key.
Copy your private AWS key to ~/.ssh/my.key (or whatever name you want to use)
Make sure you change its permissions to 600:
chmod 600 ~/.ssh/my.key
Copy the public key to the instance you wish to connect to without a password:
cat ~/.ssh/id_rsa.pub | ssh -i ~/.ssh/my.key ubuntu@10.0.0.X "cat >> ~/.ssh/authorized_keys"
Now passwordless ssh to the other machine should work:
ssh 10.0.0.X
You can use ssh keys as described here:
http://pkeck.myweb.uga.edu/ssh/