ssh password replay using ~/.ssh/config - ssh

We are currently doing a POC where ssh key pairs are not allowed, meaning we have to use passwords, with a strict 90-day password expiration imposed. So, as part of the POC, assume the username is "acme". We have to log into the host "bastion.example.com" (ssh acme@bastion.example.com), and after logging into the bastion we have to log into the target host; that is, from the bastion we run "ssh acme@machine.example.com".
Question - using ~/.ssh/config, how do we achieve this, especially using "password replay", so that we don't have to provide the password twice? With this, we can easily pass the script or command to be executed on the target host (using ProxyCommand and RemoteCommand).
Please share an example of how we can perform this "password replay".
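OpenSSH has no literal "password replay" feature, but connection sharing gets close to the same effect. Below is a minimal sketch of a ~/.ssh/config, assuming OpenSSH 7.3 or later (for ProxyJump); the ControlPath, ControlPersist value and RemoteCommand path are placeholders, not a definitive setup:
# ~/.ssh/config -- a sketch, not a definitive setup
# Keep authenticated connections open so later sessions reuse them
# instead of prompting for the password again.
Host *
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m

Host bastion
    HostName bastion.example.com
    User acme

Host machine
    HostName machine.example.com
    User acme
    # Hop through the bastion (requires OpenSSH 7.3+; on older clients use
    # ProxyCommand ssh -W %h:%p bastion instead).
    ProxyJump bastion
    # RemoteCommand /path/to/script.sh   # optional, placeholder path
The first "ssh machine" still prompts for both passwords, but while the master connections stay open, subsequent ssh and scp invocations reuse them and do not prompt again.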

Related

How to check SSH credentials are working or not

I have a large number of devices, around 300.
I have different creds for them:
SSH creds, API creds.
As I cannot manually SSH to all those devices and check whether the creds are working,
I am thinking of writing a script that I pass the device IPs to, which gives me YES as a result if the SSH creds are working and NO if not.
I am new to all this stuff! Details will be appreciated!
I will run this script on a server from where I can ssh to all the devices.
Your question isn't clear as to what sort of credentials you use for connecting to each host: do all hosts have the same connection method, for instance?
Let's assume that you use ssh's authorised keys method to log in to each host (i.e. you have a public key on each host within the ~/.ssh/authorized_keys file). You can run ssh with a do nothing command against each host and look at the exit code to see if the connection was successful.
HOST=1.2.3.4
ssh -i /path/to/my/private.key user@${HOST} true > /dev/null 2>&1
if [ $? -ne 0 ]; then echo "Error, could not connect to ${HOST}"; fi
Now it's just a case of wrapping this in some form of loop where you cycle through each host (and choose the right key for each host; perhaps you could name each private key after the name or IP address of the target host). The script will print an error for all those hosts for which a connection was not possible. Note that this script assumes that true is available on the target host, otherwise you could use ls or similar. We pipe all output to /dev/null as we're only interested in the ability to connect.
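A minimal sketch of such a loop, assuming the hosts live one per line in a file called hosts.txt and each private key is named after its host (the file name, key directory and user name are placeholders):
#!/usr/bin/env bash
# hosts.txt holds one host name or IP per line (placeholder file name)
while read -r HOST; do
    # assumed layout: one key per host, stored as /path/to/keys/<host>.key
    if ssh -i "/path/to/keys/${HOST}.key" "user@${HOST}" true > /dev/null 2>&1; then
        echo "${HOST}: YES"
    else
        echo "${HOST}: NO"
    fi
done < hosts.txt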
EDIT IN RESPONSE TO OP CLARIFICATION:
I'd strongly recommend not using username/password for login, as the username and password will likely be held in your script somewhere, or even in your shell history, if you run the command from the command line. If you must do this, then you could use expect or sshpass, as detailed here: https://srvfail.com/how-to-provide-ssh-password-inside-a-script-or-oneliner/
The ssh command shown does not spawn a shell, it literally logs in to the remote server, executes the command true (or ls, etc), then exits. You can use the return code ($? in bash) to check whether the command executed correctly. My example shows it printing out an error message for non-zero return codes, but to print out YES on successful connection, you could do this:
if [ $? -eq 0 ]; then echo "${HOST}: YES"; fi
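If you must use passwords, the same check can be driven by sshpass; a rough sketch, where the password variable, user and host are placeholders and the caveats above about storing passwords apply:
export SSHPASS='secret-password-here'   # placeholder; better read from a protected file
# -e makes sshpass take the password from the SSHPASS environment variable
if sshpass -e ssh "user@${HOST}" true > /dev/null 2>&1; then echo "${HOST}: YES"; else echo "${HOST}: NO"; fi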

Normal gitlab user with working keys cannot use PubkeyAuthentication to login to bash shell prompt

On an Ubuntu server, 'foo.com', that serves gitlab, a gitlab user, 'bar', can clone, push, and pull without having to use a password, with no problem (public key is set up on the gitlab server for user 'bar').
User 'bar' wants to use the command line on the server 'foo', and does ssh bar@foo.com. When user 'bar's ssh keys are not in 'foo''s authorized_keys, 'bar' is logged momentarily into GitLab:
debug2: shell request accepted on channel 0
Welcome to GitLab, bar
and then that session promptly exits.
When user 'bar's ssh key - even one that is not registered with GitLab - is in 'foo.com''s authorized_keys, then that user gets the expected result when doing ssh bar@foo.com. However, then user bar (on their local computer) is unable to push, pull, clone, etc. from their gitlab-managed repository, with the error message being that "'some-group/some-project.git' does not appear to be a git repository".
It appears that there is a misconfiguration such that shell access is mixed up with gitlab project access.
How can user 'bar' be able to both log in via ssh to a regular shell prompt and also use git normally (interacting with the remote git server from their local box)?
After a lot of searching I got to know why this was happening on my end. I had the same issue. I wanted to use the same SSH key for both SSH login as well as GitLab access.
I found this thread helpful:
https://gist.github.com/hanseartic/368a63933afb7c9f7e6b
In the authorized_keys file, gitlab-shell adds specific options to limit access. It adds these restrictions when the user registers a public key through the web interface, using the command= option.
We need to modify the command option to allow access to bash, and remember to remove no-pty if it is listed in the comma-separated options. For example, in my case the line contained no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty and I had to remove no-pty from the list.
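For reference, an unmodified entry written by gitlab-shell typically looks something like the following (the gitlab-shell path and key number vary per installation):
command="/home/ec2-user/gitlab_service/gitlab-shell/bin/gitlab-shell key-11",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa AAA...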
A sample modified command should look like this:
command="if [ -t 0 ]; then bash; else /home/ec2-user/gitlab_service/gitlab-shell/bin/gitlab-shell key-11; fi",no-port-forwarding,no-X11-forwarding,no-agent-forwarding ssh-rsa AAA...
Be mindful to edit the correct entry by checking the key number, or the public key and username associated with the command.
This did not require any service restart.

Connect through ssh and scp and type in password automatically

I know this question has already been asked several times but I have another problem. I have a part in my script where I connect through ssh and scp, and every time I run the script it always asks for the password. Most of you would probably answer that I should use expect or sshpass, yet I don't have either of these two. I tried running:
compgen -c
and neither expect nor sshpass exists.
Are there any alternative commands? I would really appreciate your help. Thanks
Update: I also can't install either of these since I'm only an ordinary user.
First I logged in to server A as testuser and entered the following command:
ssh-keygen -d
Do not enter any passphrase.
This will generate files in the folder ~/.ssh/
Then scp the file id_dsa.pub (the public key) to server B.
scp ~/.ssh/id_dsa.pub testuser@B:/home/testuser/.ssh/authorized_keys2
Do the same vice versa (if you want access in both directions). Then you can transfer from one server to the other without being asked for your password.
source
If you don't want to set up keys for passwordless access (against the rules?), you can set up "SSH connection sharing".
Insert these lines into your .ssh/config file:
ControlMaster auto
ControlPath /tmp/ssh_%r@%n:%p
ControlPersist 8h
Now, when you log into a server from the machine with that config, it will ask for your password the first time, and won't ask again until 8 hours of idle time have passed (so you'll usually get asked once per day).
What it's doing is keeping the connection open in the background, and then reusing the same connection for all your SSH sessions. This gives a useful connect-speed boost, and means you don't need to re-authenticate. All-in-all, it's great for accelerating scripted SSH and SCP commands.
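As a rough illustration (the user, host and file names are placeholders):
# first connection prompts for the password and leaves a master connection open
ssh user@server.example.com true
# later commands reuse that connection, so no further password prompts
scp backup.tar.gz user@server.example.com:/tmp/
ssh user@server.example.com 'df -h'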

script to ssh to a unix server

It would be helpful if somebody could tell me how to connect to a unix server using a username and password as arguments. My username and password are both "anitha".
How can I create a shell script which automatically connects to my unix server with this username and password?
I guess you want to remotely connect to your *nix server over the network. Based on that guess, to:
connect to the remote *nix server, everybody is using SSH
ssh anitha@<ip-of-unix-server>
connect automatically, write a simple bash wrapper around your ssh command and do something; not suggested, you should use ssh passwordless login (aka public/private keys) instead
#!/usr/bin/env bash
ip=172.16.0.1 #replace 172.16.0.1 with your unix server's ip
username=anitha #your ssh username
password=anitha #your ssh password
command=who #what do you want to do with remote server
arguments= #arguments for your command
# double quotes so the shell expands the variables before expect sees them
expect -c "spawn ssh $username@$ip $command $arguments ; expect password ; send \"$password\n\" ; interact"
connect without typing a password, you may need to use SSH passwordless login
Use sshpass if you really need to use non-interactive keyboard-interactive authentication (pun intended) or better switch to using pubkey-based authentication.
Note that passing the password in clear to the ssh client is very lame as the password gets exposed in the publicly-readable process list where it can be read by anyone. sshpass works around this problem by creating a pseudo-terminal and communicating with the ssh client using it, so at least the password is not exposed at runtime.
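For example, a sketch assuming sshpass is installed, with the password kept in a file only you can read (the file path, user and IP are placeholders):
echo 'anitha' > ~/.ssh_pw && chmod 600 ~/.ssh_pw   # placeholder password file
# -f makes sshpass read the password from that file rather than the command line
sshpass -f ~/.ssh_pw ssh anitha@172.16.0.1 who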
Step 1:
jsmith@local-host$ [Note: You are on local-host here]
jsmith@local-host$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/jsmith/.ssh/id_rsa):[Enter key]
Enter passphrase (empty for no passphrase): [Press enter key]
Enter same passphrase again: [Press enter key]
Your identification has been saved in /home/jsmith/.ssh/id_rsa.
Your public key has been saved in /home/jsmith/.ssh/id_rsa.pub.
The key fingerprint is:
33:b3:fe:af:95:95:18:11:31:d5:de:96:2f:f2:35:f9 jsmith@local-host
Step 2:
From local-host, run this one-liner for passwordless ssh connectivity.
cat ~/.ssh/id_rsa.pub | ssh useronanotherserver@anotherservername 'cat >> ~/.ssh/authorized_keys'
You should use expect, which is an extension of tcl that was made specifically for automating login tasks.
Basic ssh login question: could not able to spawn(ssh) using expect
How to interact with the server programattically after you have established the session: Expect Script to Send Different String Outputs

ssh: The authenticity of host 'hostname' can't be established

When I ssh to a machine, sometimes I get this warning and it prompts me to say "yes" or "no". This causes trouble when running from scripts that automatically ssh to other machines.
Warning Message:
The authenticity of host '<host>' can't be established.
ECDSA key fingerprint is SHA256:TER0dEslggzS/BROmiE/s70WqcYy6bk52fs+MLTIptM.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'pc' (ECDSA) to the list of known hosts.
Is there a way to automatically say "yes" or ignore this?
Depending on your ssh client, you can set the StrictHostKeyChecking option to no on the command line, and/or send the key to a null known_hosts file. You can also set these options in your config file, either for all hosts or for a given set of IP addresses or host names.
ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
EDIT
As @IanDunn notes, there are security risks to doing this. If the resource you're connecting to has been spoofed by an attacker, they could potentially replay the destination server's challenge back to you, fooling you into thinking that you're connecting to the remote resource while in fact they are connecting to that resource with your credentials. You should carefully consider whether that's an appropriate risk to take on before altering your connection mechanism to skip HostKeyChecking.
Reference.
Old question that deserves a better answer.
You can prevent interactive prompt without disabling StrictHostKeyChecking (which is insecure).
Incorporate the following logic into your script:
if [ -z "$(ssh-keygen -F $IP)" ]; then
ssh-keyscan -H $IP >> ~/.ssh/known_hosts
fi
It checks if public key of the server is in known_hosts. If not, it requests public key from the server and adds it to known_hosts.
In this way you are exposed to a Man-in-the-Middle attack only once, which may be mitigated by:
ensuring that the script connects first time over a secure channel
inspecting logs or known_hosts to check fingerprints manually (to be done only once)
To disable (or control disabling), add the following lines to the beginning of /etc/ssh/ssh_config...
Host 192.168.0.*
StrictHostKeyChecking=no
UserKnownHostsFile=/dev/null
Options:
The Host subnet can be * to allow unrestricted access to all IPs.
Edit /etc/ssh/ssh_config for global configuration or ~/.ssh/config for user-specific configuration.
See http://linuxcommando.blogspot.com/2008/10/how-to-disable-ssh-host-key-checking.html
Similar question on superuser.com - see https://superuser.com/a/628801/55163
Make sure ~/.ssh/known_hosts is writable. That fixed it for me.
The best way to go about this is to use 'BatchMode' in addition to 'StrictHostKeyChecking'. This way, your script will accept a new hostname and write it to the known_hosts file, but won't require yes/no intervention.
ssh -o BatchMode=yes -o StrictHostKeyChecking=no user@server.example.com "uptime"
This warning is issued due to a security feature; do not disable it.
It's just displayed once.
If it still appears after second connection, the problem is probably in writing to the known_hosts file.
In this case you'll also get the following message:
Failed to add the host to the list of known hosts
You may fix it by changing the owner or the permissions of the file so it is writable by your user.
sudo chown -v $USER ~/.ssh/known_hosts
Edit your config file, normally located at ~/.ssh/config, and at the beginning of the file add the lines below:
Host *
User your_login_user
StrictHostKeyChecking no
IdentityFile ~/my_path/id_rsa
User set to your_login_user says that these settings belong to your_login_user
StrictHostKeyChecking set to no will avoid the prompt
IdentityFile is the path to the private RSA key
This works for me and my scripts, good luck to you.
Ideally, you should create a self-managed certificate authority. Start with generating a key pair:
ssh-keygen -f cert_signer
Then sign each server's public host key:
ssh-keygen -s cert_signer -I cert_signer -h -n www.example.com -V +52w /etc/ssh/ssh_host_rsa_key.pub
This generates a signed public host key:
/etc/ssh/ssh_host_rsa_key-cert.pub
In /etc/ssh/sshd_config, point the HostCertificate to this file:
HostCertificate /etc/ssh/ssh_host_rsa_key-cert.pub
Restart the sshd service:
service sshd restart
Then on the SSH client, add the following to ~/.ssh/known_hosts:
@cert-authority *.example.com ssh-rsa AAAAB3Nz...cYwy+1Y2u/
The above contains:
@cert-authority
The domain *.example.com
The full contents of the public key cert_signer.pub
The cert_signer public key will trust any server whose public host key is signed by the cert_signer private key.
Although this requires a one-time configuration on the client side, you can trust multiple servers, including those that haven't been provisioned yet (as long as you sign each server, that is).
For more details, see this wiki page.
Do this -> chmod +w ~/.ssh/known_hosts. This adds write permission to the file at ~/.ssh/known_hosts. After that the remote host will be added to the known_hosts file when you connect to it the next time.
With reference to Cori's answer, I modified it and used the command below, which works. Without "exit", the remaining commands were actually logging in to the remote machine, which I didn't want in the script.
ssh -o StrictHostKeyChecking=no user@ip_of_remote_machine "exit"
Add these to your /etc/ssh/ssh_config
Host *
UserKnownHostsFile=/dev/null
StrictHostKeyChecking=no
Generally this problem occurs when you are modifying the keys very often. Depending on the server, it might take some time for the new key you have generated and pasted on the server to be picked up. So after generating the key and pasting it on the server, wait for 3 to 4 hours and then try. The problem should be solved. It happened to me.
The following steps are used to authenticate yourself to the host
Generate a ssh key. You will be asked to create a password for the key
ssh-keygen -f ~/.ssh/id_ecdsa -t ecdsa -b 521
(above uses the recommended encryption technique)
Copy the key over to the remote host
ssh-copy-id -i ~/.ssh/id_ecdsa user@host
N.B. the user@host will be different for you. You will need to type in the password for this server, not the key's password.
You can now login to the server securely and not get an error message.
ssh user@host
All source information is located here:
ssh-keygen
For anyone who finds this and is simply looking to prevent the prompt on first connection, but still wants ssh to strictly check the key on subsequent connections (trust on first use), you can set StrictHostKeyChecking to accept-new in ~/.ssh/config, which will do what you're looking for. You can read more about it in man ssh_config. I strongly discourage disabling key checking altogether.
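For example, in ~/.ssh/config (the host pattern is only an illustration; scope it as narrowly as you need):
Host *.example.com
    StrictHostKeyChecking accept-new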
Run this on the host server; it's a permissions issue:
chmod -R 700 ~/.ssh
I had the same error and wanted to draw attention to the fact that, as it just happened to me, you might just have the wrong privileges. You've set up your .ssh directory as either the regular or the root user, and thus you need to be the correct user. When this error appeared, I was root but had configured .ssh as the regular user. Exiting root fixed it.
This happens when the host is not yet known. If you run the command manually once, it will ask for confirmation (and for the password, if you use password authentication). After you answer yes, the host key is saved to known_hosts and it will never ask again to type 'yes' or 'no'.
For me, the reason was wrong permissions on ~/.ssh/known_hosts.
I had no write permission on the known_hosts file, so it asked me again and again.
In my case, the host was unknown, and instead of typing yes to the question "Are you sure you want to continue connecting (yes/no/[fingerprint])?" I was just hitting enter.
I solved the issue which gives the error written below:
Error:
The authenticity of host 'XXX.XXX.XXX' can't be established.
RSA key fingerprint is 09:6c:ef:cd:55:c4:4f:ss:5a:88:46:0a:a9:27:83:89.
Solution:
1. Install any OpenSSH tool.
2. Run the ssh command.
3. It will ask whether you want to add this host; accept with yes.
4. The host will be added to the known hosts list.
5. Now you are able to connect to this host.
This solution is working now.