sshpass with proxycommand returns without running anything - ssh

In our environments, we have several servers in production. Every time I want to search for something, it may be in 1 of 4 different servers.
I am creating a script to automate this search, so that I directly know which server is involved.
I am connecting through a jumphost.
So far the following command works fine :
$ ssh -oProxyCommand="ssh -W %h:%p user@jumphost" user@server "ls"
Now, because I have to run this several times, I am searching for a way to only have to use the password once.
Both the jumphost and the server require the same password, and public keys are not an option (not allowed, I literally cannot do it).
I have been reading about sshpass for this and am trying this :
$ sshpass -p password ssh -oProxyCommand="ssh -W %h:%p user@jumphost" user@server "ls"
(I know -p is not safe and will use -e or -f as soon as I am successful with this step.)
When I do this, I can log in to both systems, but the command returns before I see the output of ls.
I have tried adding the -t option to ssh, without any success.
I have also tried ssh's -J option, with the same result (the command returns without printing anything):
$ sshpass -p password ssh -J user@jumphost user@server "ls"
Any suggestions?
EDIT:
The solution was to use sshpass twice:
$ sshpass -p password ssh -oProxyCommand="sshpass -p password ssh -W %h:%p user@jumphost" user@server "ls"
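To keep the plaintext password off the command line entirely, the same double-hop pattern should also work with sshpass -f, which reads the password from the first line of a file (a sketch, using a hypothetical ~/.sshpass file; both hops use the same password, as stated above):
$ printf '%s\n' 'password' > ~/.sshpass && chmod 600 ~/.sshpass
$ sshpass -f ~/.sshpass ssh -oProxyCommand="sshpass -f ~/.sshpass ssh -W %h:%p user@jumphost" user@server "ls"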

Try running ssh in verbose mode:
ssh -vvv -oProxyCommand="ssh -W %h:%p user#jumphost" user#server "ls"
I'm sure it will show something of interest, a hook with which you can figure this out.

sshpass, permission denied, please try again

I know that this question has been asked several times (https://superuser.com/questions/606252/how-to-use-sshpass-for-chained-connection and https://unix.stackexchange.com/questions/320412/how-to-use-sshpass-to-supply-a-password-on-the-second-ssh-hop) but none of the solutions I've found so far are working.
I'm trying to access a third machine (third@machine) by using sshpass, in order not to be prompted for a password. However, it is mandatory to go through a bridge machine (bridge@machine) before reaching the final one.
Each time I need to enter the passwords for bridge@machine and for third@machine, so my workflow is:
ssh bridge@machine
insert password:
ssh third@machine
insert password:
Until now, I was able to avoid the first password by using sshpass in the ProxyCommand inside the ~/.ssh/config file as follows:
vi ~/.ssh/config:
Host *.reference
    User example_user
    ProxyCommand sshpass -p $bridge_machine_password$ ssh -o StrictHostKeyChecking=no bridge@machine "nc -w 60 `basename %h .reference` %p"
and at the same time I've defined an alias named "curie" in the .bashrc file:
alias curie='ssh third@machine.reference'
So if I run the alias curie I'm able to avoid the first password, but I'm still prompted for the password of third@machine.
For this reason I've tried to use sshpass to access third@machine in the following manner:
>sshpass -p 'third_machine_password' ssh -oProxyCommand="ssh -W %h:%p bridge@machine" third@machine
Unfortunately, this gives back:
Permission denied, please try again.
Could this be a restriction imposed by third@machine, or am I doing something wrong?
If your password contains special characters such as $ (e.g. abcd#1234$$), escape them with \: add a \ before each $. It worked for me.
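For example (hypothetical password and host), single-quoting the password also keeps the shell from expanding characters like $:
sshpass -p 'abcd#1234$$' ssh user@host
sshpass -p abcd#1234\$\$ ssh user@host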
I found a solution:
First, create the ProxyCommand in the config file:
Host *.reference
    User bridge
    ProxyCommand sshpass -p passwd_bridge_machine2 ssh -o StrictHostKeyChecking=no bridge@machine2 "nc -w 60 `basename %h .reference` %p"
After this is set in the config, I created the alias in the .bashrc file:
alias curie='sshpass -p passwd_third_machine3 ssh third@machine3.reference'
It is important to add the .reference suffix because it will first trigger the ProxyCommand in the config file and then use the sshpass in the alias. Once everything is set up, it is only necessary to run the alias in the terminal to open the third machine without any password.
Hope it helps someone else.

How to: scp over Jumphost, each with private keys

I want to run an scp command over a Jumphost to the target server. Both the Jumphost and the target server require a key for the login.
If no key were required, I think this command would work:
scp -o ProxyJump=usernameJumpserver@ipJumpserver filename usernameTargetserver@ipTargetserver:/path/filename
So, including a key, I get to this command:
scp -i /pathOnMyClient/key -o ProxyJump=usernameJumpserver@ipJumpserver filename usernameTargetserver@ipTargetserver:/path/filename
Then I get the error "usernameTargetserver@ipTargetserver: Permission denied (publickey)."
I can't add the (probably?) required -i /pathJumpserver/key for the jump host to the same command. How does this work?
As you cannot enter the passphrase of your ssh key at the jumphost, I suggest loading your key into your local ssh-agent and then using one of:
> scp -o ProxyJump=user@jump.host localfile user@target.host:
> scp -o ProxyJump=user@jump.host user@target.host:file localdir
this works for me!
HTH
Stefan K.
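For the first step above, loading the key into the local agent might look like this (a sketch; the key path is the one from the question, and the ssh-add line can be repeated for a second key):
eval "$(ssh-agent -s)"       # start an agent if one is not already running
ssh-add /pathOnMyClient/key  # prompts once for the key's passphrase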
So we have:
LocalHost
JumpHost
DestinationHost
On LocalHost, in ~/.ssh/config add:
Host JumpHost
    User JumpHostUser
    IdentityFile ~/.ssh/id_rsa
    # other optional settings:
    # Port 2222
    # HostName 192.168.0.1

Host DestinationHost
    User DestinationHostUser
    IdentityFile ~/.ssh/id_rsa_jumphost
And you can use what @StefanKaerst suggested:
scp -o ProxyJump=JumpHost DestinationHost:/file /LocalFile
scp -o ProxyJump=JumpHost /LocalFile DestinationHost:/File
I have it aliased as
scpj='scp -o ProxyJump=JumpHost'
So I only type:
scpj DestinationHost:/file /LocalFile
You need to have all the keys in place though: from local to jump, from jump to destination, and from local to destination.
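Alternatively (assuming OpenSSH 7.3 or newer, which introduced the ProxyJump directive), the jump can be written into the config itself so the -o option is not needed; a sketch based on the hosts above:
Host DestinationHost
    User DestinationHostUser
    IdentityFile ~/.ssh/id_rsa_jumphost
    ProxyJump JumpHost
After that, a plain scp DestinationHost:/file /LocalFile goes through the jump host automatically.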
I could not get this working with ProxyJump, so I fell back to the more verbose ProxyCommand instead. This works for me for copying from A to C through B:
scp -i <path on A to key for C> \
    -oProxyCommand="ssh -i <path on A to key for B> -W %h:%p <user>@B" \
    /path/to/my/file <user>@C:~/
That worked for me:
scp -o ProxyJump=USER_NAME@35.1.2.3 local-File.txt 10.1.2.3:~/
Advanced ssh from Windows is not much fun at all.
I've found the following to work.
Create a C:\Users\u.username\.ssh\config file like:
Host jumphost.server
    HostName jumphost.server
    User u.username
    ForwardAgent yes
    IdentityFile C:\Users\u.username\.ssh\id_rsa

Host * !jumphost.server
    ProxyCommand ssh.exe u.username@jumphost.server -W %h:%p
    IdentityFile C:\Users\u.username\.ssh\id_rsa
(Replace jumphost.server, your username, and the path to your ssh private key with your own data.)
Then scp from the final target.server works this way (from PowerShell):
scp -F .\.ssh\config u.username@target.server:/path/to/file C:\Users\u.username\
or from local Windows to the target Linux machine:
scp -F .\.ssh\config C:\Users\u.username\file u.username@target.server:/path/to/file
The -F flag loads the predefined config file.
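On newer Windows OpenSSH builds (an assumption: version 8.0 or later, where scp gained the -J flag), the jump can also be given directly on the command line without a config file:
scp -J u.username@jumphost.server u.username@target.server:/path/to/file C:\Users\u.username\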

Ansible percent expand

I have an Ansible playbook which connects to a virtual machine via a non-standard ssh port (forwarded to localhost) and as a different user than the host user (vagrant).
The ssh port is specified in the ansible inventory:
[vms]
localhost:2222
The username is given on the command line to ansible-playbook:
ansible-playbook -i <inventory from above> <some playbook> -u vagrant
Communication with the VM works correctly; however, %p always expands to 22 and %r to the host username.
Consequently, I cannot flush the SSH connection (for the user's changed group membership to take effect) like this:
- name: flush the ssh connection
  command: ssh -o ControlPath="~/.ansible/cp/ansible-ssh-%h-%p-%r" -O stop {{inventory_hostname}}
  delegate_to: 127.0.0.1
Am I making a silly mistake somewhere? Alternatively, is there a different way to flush the SSH connection?
The percent tokens are not expanded by Ansible, but by ssh later on.
Sorry, forgot to add the most important part
Using
command: ssh -o ControlPath=[...] -O stop {{inventory_hostname}}
will use the default port, because you didn't specify it on the command line. You would also have to specify the port to "flush" the connection this way:
command: ssh -o ControlPath=[...] -O stop -p {{inventory_port}} {{inventory_hostname}}
But I don't think it is needed. Ansible should clean up the connections when the playbook ends, and I don't see any other reason to do that.
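If the only goal is to pick up the user's changed group membership mid-play, a different route worth noting (an assumption: Ansible 2.3 or newer, where this meta task exists) is to let Ansible drop its own persistent connection instead of calling ssh -O stop manually:
- name: flush the ssh connection so the new group membership takes effect
  meta: reset_connection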

Call ssh-copy-id in an Ansible playbook - How to handle password prompt?

I have two servers. I manage serverA with Ansible. serverB is not managed with Ansible. I want serverA to be able to access serverB by copying the ssh_pub_key of serverA to serverB.
This can be done manually by calling ssh-copy-id user@serverB on serverA.
I want to do this with Ansible on serverA automatically.
- name: Register ssh key at serverB
  command: ssh-copy-id -i /home/{{user}}/.ssh/id_rsa.pub -o StrictHostKeyChecking=no user@serverB
Calling ssh-copy-id requires me to enter my ssh password for user@serverB so the key can be copied.
How can I do this via Ansible? I want it to ask for the user@serverB password interactively while executing the playbook. Storing the password in Ansible Vault is also an option. Even then, I still do not know how to avoid the interactive password prompt of ssh-copy-id.
I also added -o StrictHostKeyChecking=no to the call because this is another prompt that normally requires user interaction when calling ssh-copy-id.
If using the ssh-copy-id command is not a restriction, you might as well try out the Ansible authorized_key module.
Then your code could look something like this:
authorized_key:
  user: <user>
  key: "{{ lookup('file', '/home/' + lookup('env', 'USER') + '/.ssh/id_rsa.pub') }}"
You can try the sshpass tool. It would require modifying your command like this:
command: sshpass -p password ssh-copy-id -i /home/{{user}}/.ssh/id_rsa.pub -o StrictHostKeyChecking=no user@serverB
but there are other options for providing the password -- see the sshpass(1) manual page.
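To keep the interactive prompt the question asks for, one option (a sketch; the play structure and variable name are made up, and it assumes sshpass is installed on serverA) is to prompt once at the start of the play and hand the value to sshpass through the SSHPASS environment variable:
- hosts: serverA
  vars_prompt:
    - name: serverb_password
      prompt: "Password for user@serverB"
      private: yes
  tasks:
    - name: Register ssh key at serverB
      command: sshpass -e ssh-copy-id -i /home/{{user}}/.ssh/id_rsa.pub -o StrictHostKeyChecking=no user@serverB
      environment:
        SSHPASS: "{{ serverb_password }}"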

How to SCP a file from a 2-deep connection

Say I SSH into a server Server1 and from there SSH into a server Server2 which is only accessible from Server1. The terminal session below simulates this behaviour:
[name@mylaptop]$ ssh user@Server1
user@Server1's password:
*** Welcome to Server1! ***
[user@Server1]$ ssh user2@Server2
user2@Server2's password:
*** Welcome to Server2! ***
[user2@Server2]$
Now I have a file, named file.txt in my home directory on Server2:
[user2@Server2]$ ls
file.txt
[user2@Server2]$
Is it possible to use scp to copy file.txt from Server2 onto mylaptop with a single command (i.e. not needing to first copy the file to Server1)?
In other words, can this be done easier than the following:
[name@mylaptop]$ ssh user@Server1
user@Server1's password:
*** Welcome to Server1! ***
[user@Server1]$ scp user2@Server2:~/file.txt .
user2@Server2's password:
file.txt 100% 690 0.7KB/s 00:00
[user@Server1]$ logout
Connection to Server1 closed.
[name@mylaptop]$ scp user@Server1:~/file.txt .
user@Server1's password:
file.txt 100% 690 0.7KB/s 00:00
[name@mylaptop]$ ls
file.txt
It's possible and relatively easy, even when you need to use certificates for authentication (typical in AWS environments).
The command below will copy files from a remotePath on server2 directly into your machine at localPath. Internally the scp request is proxied via server1.
scp -i user2-cert.pem -o ProxyCommand="ssh -i user1-cert.pem -W %h:%p user1@server1" user2@server2:/<remotePath> <localpath>
If you use password authentication instead, try with:
scp -o ProxyCommand="ssh -W %h:%p user1@server1" user2@server2:/<remotePath> <localpath>
If you use the same user credentials on both servers:
scp -o ProxyCommand="ssh -W %h:%p commonuser@server1" commonuser@server2:/<remotePath> <localpath>
You can use port forwarding:
Execute
ssh -L60000:Server2:22 user@Server1
in one terminal and keep this process open.
Then in another terminal run
scp -P 60000 user2@localhost:file.txt .
(You can replace 60000 with your favourite port number.)
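If you prefer not to keep a second terminal open, the same tunnel can be put in the background (a sketch using the same port; -f backgrounds ssh after the password is entered and -N skips running a remote command):
ssh -f -N -L 60000:Server2:22 user@Server1
scp -P 60000 user2@localhost:file.txt .
Remember to kill the background ssh process when you are done.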
Try the answers on Server Fault:
https://serverfault.com/questions/37629/how-do-i-do-multihop-scp-transfers
The answers cover a variety of flavours of ssh.