ssh-add does not work inside docker image - ssh

We are connecting to a remote server via SFTP. The connection requires SSH keys and a passphrase. To eliminate the passphrase prompt when SFTP is executed, we are using ssh-add via a script to add the passphrase:
eval $(ssh-agent)
# $1 is the private-key path, $2 is a file holding the passphrase
pass=$(cat $2)
expect << EOF
spawn ssh-add $1
expect "Enter passphrase"
send "$pass\r"
expect eof
EOF
This executes fine in Docker and I get the 'Identity added' message.
However, when SFTP is run subsequently, it asks for the passphrase again.
What could be wrong?
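One common cause worth checking (an assumption here, since the surrounding setup isn't shown): eval $(ssh-agent) exports SSH_AUTH_SOCK and SSH_AGENT_PID only into the shell that runs the script. If the script is executed as a child process, those variables are gone when it exits, so a later sftp cannot reach the agent and falls back to prompting. A minimal sketch of a fix, assuming the script above is saved as add-key.sh and that the key path, passphrase file, and host are placeholders:

# Source the script instead of executing it, so the agent variables
# land in the current shell, the same one that will later run sftp.
. ./add-key.sh ~/.ssh/id_rsa ~/.ssh/passfile
echo "$SSH_AUTH_SOCK"          # now set in this shell
sftp myuser@myhost.example.com # should no longer prompt for the passphrase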

Related

Interact with an allocated tty in Ansible to log in via CyberArk

I am attempting to use Ansible to automate some workflows on Linux servers. However, I am forced to log in via CyberArk.
In a normal ssh session, you connect with your credentials and, once you are logged in, you are prompted to enter the reason for the login.
When using Ansible, I add my credentials, and after turning on the debug parameters I found this message:
'PSPSD072E Perform session error occurred. Reason: You are required to specify more information for this operation and no terminal was allocated. Use the [-t] option to force terminal allocation, or connect with SSH through PSMP to the target and then run the command.. (Codes: -1, -1)\n'
As a next step, I edit ansible.cfg and add the -tt parameter for the ssh connection:
[ssh_connection]
ssh_args = -tt -C -o ControlMaster=auto -o ControlPersist=60s
However, when I run Ansible now, a terminal is allocated and I can type text into it, but I don't know how to close it. That is, I don't know how to submit the text, close the terminal, and continue with the run of the playbook.
For example:
ansible-playbook -i inventory.yml playbook.yml --ask-pass -vvvv
SSH password: #here I write my credentials
PLAYBOOK: playbook.yml
TASK [command] #Task of the playbook
(Here I can start to write, but how to submit the text I just wrote?) #Terminal allocated
I tried pressing Enter and Ctrl+C, but neither works.
So basically, my question is: once the terminal is allocated in Ansible, how can I submit text and continue with the rest of the playbook?
Thanks.
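One possible approach (a sketch, not tested against CyberArk; the password, the reason text, and the exact prompt strings printed by PSMP are all assumptions to adapt): instead of typing into the allocated terminal by hand, drive ansible-playbook itself with expect, so the reason is sent automatically when the prompt appears:

#!/bin/bash
expect <<'EOF'
set timeout 300
spawn ansible-playbook -i inventory.yml playbook.yml --ask-pass
expect "SSH password:"
send "mypassword\r"
# Assumed prompt printed by PSMP inside the allocated tty:
expect -re "reason"
send "routine maintenance\r"
# Let the rest of the playbook run to completion.
expect eof
EOF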

Expect script not working and terminal closes immediately

I don't know what's wrong with the script. I set up a new profile in the iTerm terminal to run the script, but it never works and closes immediately. Here's the script:
#!/usr/bin/expect -f
set timeout 120
set secret mysecret
set username asdf
set host {123.456.789.010}
set password password123
log_user 0
spawn oathtool --totp --base32 $secret
expect -re \\d+
sleep 400
set otp $expect_out(0,string)
spawn ssh -2 $username@$host
expect "*assword:*"
send "$password\n"
expect "Enter Google Authenticator code:"
send "$otp\n"
interact
First, test your ssh connection with:
ssh -v <auser>@<ahost>
That will validate the SSH session works.
Make sure not to use ssh -T ..., since you might need a terminal for the expect commands to work.
Second, add at least an echo at the beginning of the script, to see if it is called:
puts "Script running\r"
Third, see if a bash script with only the interactive part using expect, as in here, would work better in this case; one possible shape is sketched below.
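A sketch of that hybrid, reusing the question's placeholder values (username, host, secret, password): the TOTP is generated in plain bash, and expect handles only the login. Using expect -c rather than a heredoc keeps stdin attached to the terminal, so interact still works:

#!/bin/bash
# Generate the TOTP in plain bash; no expect needed for this part.
otp=$(oathtool --totp --base32 "mysecret")

expect -c "
  set timeout 120
  spawn ssh -2 asdf@123.456.789.010
  expect \"*assword:*\"
  send \"password123\r\"
  expect \"Enter Google Authenticator code:\"
  send \"$otp\r\"
  interact
"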

How to check SSH credentials are working or not

I have a large number of devices, around 300, and different creds for them (SSH creds, API creds).
Since I cannot manually SSH to all those devices to check whether the creds are working, I am thinking of writing a script that takes the device IPs and outputs YES if the SSH creds are working and NO if not.
I am new to all this, so details will be appreciated!
I will run this script on a server from which I can ssh to all the devices.
Your question isn't clear as to what sort of credentials you use for connecting to each host: do all hosts have the same connection method, for instance?
Let's assume that you use ssh's authorised-keys method to log in to each host (i.e. you have a public key on each host within the ~/.ssh/authorized_keys file). You can run ssh with a do-nothing command against each host and look at the exit code to see if the connection was successful.
HOST=1.2.3.4
ssh -i /path/to/my/private.key user@${HOST} true > /dev/null 2>&1
if [ $? -ne 0 ]; then echo "Error, could not connect to ${HOST}"; fi
Now it's just a case of wrapping this in some form of loop where you cycle through each host (and choose the right key for each host; perhaps you could name each private key after the name or IP address of the target host). The script will print out all those hosts for which a connection was not possible. Note that this script assumes that true is available on the target host; otherwise you could use ls or similar. We pipe all output to /dev/null as we're only interested in the ability to connect. A sketch of such a loop is shown below.
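Here is one way that loop could look (a sketch: hosts.txt holding one IP or hostname per line, and the per-host key naming scheme, are assumptions):

#!/bin/bash
while read -r HOST; do
  # -n stops ssh from draining hosts.txt via stdin;
  # BatchMode=yes fails fast instead of prompting for a password.
  ssh -n -i "/path/to/keys/${HOST}.key" -o BatchMode=yes -o ConnectTimeout=5 \
      "user@${HOST}" true > /dev/null 2>&1
  if [ $? -ne 0 ]; then
    echo "Error, could not connect to ${HOST}"
  fi
done < hosts.txt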
EDIT IN RESPONSE TO OP CLARIFICATION:
I'd strongly recommend not using username/password for login, as the username and password will likely be held in your script somewhere, or even in your shell history, if you run the command from the command line. If you must do this, then you could use expect or sshpass, as detailed here: https://srvfail.com/how-to-provide-ssh-password-inside-a-script-or-oneliner/
The ssh command shown does not spawn a shell; it literally logs in to the remote server, executes the command true (or ls, etc.), then exits. You can use the return code ($? in bash) to check whether the command executed correctly. My example shows it printing an error message for non-zero return codes, but to print out YES on a successful connection, you could do this:
if [ $? -eq 0 ]; then echo "${HOST}: YES"; fi
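Or, a bit more compactly, covering both outcomes at once (same placeholder key path and user as above, with BatchMode=yes added so a failed key auth errors out instead of prompting):

ssh -i /path/to/my/private.key -o BatchMode=yes user@${HOST} true > /dev/null 2>&1 && echo "${HOST}: YES" || echo "${HOST}: NO"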

How does ssh read the password that I type in?

My understanding is that if I type the following command in my xterm:
$ ssh ir@localhost
Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
ir@localhost's password:
Then the stdin and stdout of the ssh process are both connected to the pty. So when I type in the password, ssh just reads it from stdin.
But my mental model fails to explain this:
$ yes | ssh ir@localhost
Pseudo-terminal will not be allocated because stdin is not a terminal.
Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
ir@localhost's password:
...
...
zsh: command not found: y
...
Here, yes's stdin is connected to the pty, and yes's stdout is piped to ssh's stdin. So ssh should be getting a deluge of ys, but it is smart enough to tell that its stdin is not a tty, and that the contents of stdin should not be interpreted as the password. Instead, the ys are buffered, and once the login succeeds, they are delivered directly to the shell process on the remote end.
But then how is ssh able to get the password that I am typing in? The pty sends my password to yes, which drops it on the floor.
Also, ssh's claim not to allocate a pty appears to be a lie. The following snippet prints out whether or not stdin is a tty:
$ [ -t 0 ] && echo true || echo false;
true
When I pipe this command to ssh, it initially prints out "false", as expected:
$ echo "[ -t 0 ] && echo true || echo false;" | ssh ir#localhost
Pseudo-terminal will not be allocated because stdin is not a terminal.
Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
ir@localhost's password:
...
false
$ [ -t 0 ] && echo true || echo false;
true
But when I run the same command in the remote shell, it prints out "true". I can even open up vim, and when I resize my local terminal, vim resizes the text it is displaying appropriately. This can only be possible if the ssh client sends information about the resizing over the wire, and if sshd notifies the vim process, just like a pty would.
Interestingly, when I hit Ctrl+C, the ssh session is immediately terminated. My explanation for this is that the pty intercepts the Ctrl+C and sends a SIGINT to both yes and ssh. If ssh had allocated a pty, it would intercept the signal and transmit it over the wire to the remote host, and whatever process was running remotely would be the one interrupted. But since ssh did not allocate a pty, it simply died. So this part is expected... but I still don't understand why [ -t 0 ] passes in the remote shell, and how ssh is able to read the password even when yes is piped to it.
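For what it's worth, the usual explanation is that ssh does not read the password from stdin at all: it opens the controlling terminal, /dev/tty, directly, and that works no matter where stdin points. The same trick can be seen in plain shell (a sketch, not ssh's actual code):

# stdin is the pipe from yes, yet read still gets keyboard input,
# because it explicitly reads from the controlling terminal.
yes | bash -c 'read -rs -p "password: " pass < /dev/tty; echo; echo "got: $pass"'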

ssh-add when ssh-agent fails

I am trying to write a script that makes using {ssh,gpg}-agent effortless (like keychain, but I discovered it too late). This script will be run in a sensitive environment, so I set a timeout in order to remove the keys from the agent after some time.
I managed to write the spawn/reuse part, but now I want ssh-add to be called automatically when the user opens an ssh connection and the agent has no proper key.
Is there any way to make ssh-agent call ssh-add on failure, or something better?
What I am doing (assuming the key has a distinctive name)
I have a script in ~/bin/ (which is in my PATH)
#!/bin/bash
# Load the key only if the agent does not already hold it.
if ! ssh-add -l | grep -q nameOfmyKey
then
    ssh-add -t 2h ~/path-to-mykeys/nameOfmyKey.key
fi
ssh myuser@myserver.example.net
ssh-add -l lists all keys currently held by the agent.
The -t parameter ensures that the key is enabled for a restricted time only.
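To make this kick in automatically for every connection rather than only via the dedicated script, one option (a sketch; the key name and path are the same placeholders as above) is a wrapper function in ~/.bashrc:

# Transparently load the key before any ssh invocation.
ssh() {
    if ! ssh-add -l | grep -q nameOfmyKey; then
        ssh-add -t 2h ~/path-to-mykeys/nameOfmyKey.key
    fi
    command ssh "$@"    # "command" avoids calling this function recursively
}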