awk command to check whether the SSH connection to peerIP is successful - ssh

The command
ssh -q -o "BatchMode=yes" user#host "echo 2>&1" && echo "OK" || echo "NOK"
will help in checking whether the SSH connection to peer IP is a success or not.
But I only have the peer IP, so
ssh -q -o "BatchMode=yes" peerIP 2>&1" && echo "OK" || echo "NOK"
doesn't work.
Does anyone know how I can solve it? A one-liner command is required and it should work on AIX, HP-UX, Linux... any help or suggestion is very much appreciated.

Why do you mention awk?
Anyway, here is a solution that works for me:
ssh USER@HOST 'env | grep SSH_CLIENT && echo "OK" || echo "NOK"'
When SSH is connected to the host, environment variables such as SSH_CLIENT will be set.
Also, when you connect to a remote host without specifying a user in the SSH URL, your current username is used by default. You can change that behavior in /etc/ssh/ssh_config.
Maybe that will fix it.
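If you only have the peer IP, a minimal sketch along the lines of the question's own command should also do it (BatchMode prevents hanging on a password prompt; the ConnectTimeout value is just an example):
# assumes key-based authentication to peerIP is already set up
ssh -q -o BatchMode=yes -o ConnectTimeout=5 peerIP exit && echo "OK" || echo "NOK"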
Good luck

Related

How to escape ANSI format on remote SSH command?

I wanted to change the title of the window using the command as described here over SSH, however I kept getting the error:
033]sh: Hello: command not found
Connection to host closed.
with the command:
ssh.exe user@host -t 'echo -en "\033];Hello World\007"'
No matter how I try to escape them, it somehow returns an error. I tried:
ssh.exe user@host -t 'echo -en "\\033];Hello World\\007"'
ssh.exe user@host -t "echo -en \'\\033];Hello World\\007\'"
Any idea how to fix this?
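This one is left unanswered in the thread; a common workaround (my suggestion, not from the original post) is to let printf interpret the escapes instead of relying on the remote echo -en, since printf handles octal escapes such as \033 and \007 itself:
ssh.exe user@host -t 'printf "\033];Hello World\007"'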

Piping the output of ssh sudo

I sometimes have a need to run commands as root on a remote server, and parse the output of the command on my local server. The remote server does not allow root login by ssh, but has sudo configured in a way that requires a password. A simplified example of what I need to do is
ssh remote sudo echo bar | tr bar foo
(Obviously in this simplified example, there's no good reason to need to run echo on a different machine to tr: this is just a toy example to explain what I'm trying to do.)
If I run the command above, I get an error that sudo has no way to prompt for a password:
richard@local:~$ ssh remote sudo echo bar | tr bar foo
sudo: no tty present and no askpass program specified
One way I can try to fix this is by adding the -t option to ssh. If I do that, sudo does wait for and accept a password, but the output of ssh's pseudo-terminal goes to stdout, meaning the sudo prompt message is piped to tr and not displayed to the user. If the user doesn't know sudo is waiting for a password, they will think the script has hung, and passing the prompt message to the pipe probably breaks further processing:
richard@local:~$ ssh -t remote sudo echo bar | tr bar foo
[sudo] posswood foo oichood:
foo
(This admittedly silly example shows that the prompt has been processed by the tr command the output is piped to.)
The other way I can see to try to fix this is by adding the -S option to sudo. If I do that, sudo prompts on stderr for the password, so the prompt is not passed down the pipeline. That's good, but sudo also accepts the password on standard input meaning it's echoed to the terminal where anyone looking over the user's shoulder can read it:
richard@local:~$ ssh remote sudo -S echo bar | tr bar foo
[sudo] password for richard: p8ssw0rd
foo
I've found inelegant ways of working around the problems with these two options, but my workarounds hit a problem if the user gets their password wrong the first time. That in itself is a problem. Examples of this are:
richard@local:~$ echo "[sudo] password for $USER:"; \
ssh -t remote sudo echo bar | tail +2 | tr bar foo
richard@local:~$ (read -s password; echo $password; echo >&2) \
| ssh remote sudo -S echo bar | tr bar foo
I'm sure there must be a good solution to this, as it doesn't seem an uncommon thing to want to do. Any ideas?
The best solution I've come up with is to use sudo -S and disable local echo so the password isn't shown as you type it:
$ { stty -echo; ssh remote sudo -S echo hello; stty echo; echo 1>&2; }
[sudo] password for user:
hello
This leaves sudo in charge of the password prompting, so it works properly if the user types the password wrong.
I don't think any solution using ssh -t can ever work properly, since it combines stderr and stdout.
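A rough sketch of wrapping that pattern in a reusable helper (the function name ssh_sudo and its exact structure are my own, not from the original answer):
# Run a command with sudo on a remote host and pipe its stdout locally.
# sudo -S prompts on stderr, and local echo is disabled while the password is typed.
ssh_sudo() {
    local host=$1; shift
    stty -echo                # hide the typed password
    ssh "$host" sudo -S "$@"
    local rc=$?
    stty echo                 # restore terminal echo
    echo 1>&2                 # newline after the hidden input
    return $rc
}
# Usage: ssh_sudo remote echo bar | tr bar foo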

SSH always exits with code 0. How to check if connection succeeded?

I am trying to check that an SSH connection succeeds (while running a script).
I am trying to use the answer from here, but the unsuccessful connection returns an error code of 0.
$ ssh -q user@downhost exit
$ echo $?
0
I have tried a variety of invalid ssh commands, but they all return an exit code of 0, making it hard to check for failure.
I feel I must be doing something wrong, but can't work it out. Is there some setting that I have changed maybe?
I am running CentOS 7, with the following SSH version:
$ ssh -V
OpenSSH_6.6.1p1, OpenSSL 1.0.1e-fips 11 Feb 2013
I remain confused as to why the example is failing, but the following seems to be working for me.
can_connect_ssh()
{
    HOST=$1
    ssh -q -o BatchMode=yes -o ConnectTimeout=5 "$HOST" echo connection_success | \
        grep connection_success &> /dev/null
}
Called as:
if can_connect_ssh <hostname>; then
    echo "Successful Connection"
else
    echo "Failed Connection"
fi
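For reference, the OpenSSH manual says ssh exits with the exit status of the remote command, or with 255 if an error occurred, so on most systems a plain exit-code check is enough. A sketch (not from the original answer, and it obviously won't help if your ssh really does return 0 on failure as described above):
if ssh -q -o BatchMode=yes -o ConnectTimeout=5 user@downhost true; then
    echo "Successful Connection"
else
    echo "Failed Connection"   # typically exit code 255 on a connection or authentication failure
fi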

I can ssh just fine, but ansible says "no route to host"

I wrote a script to run up several vms using vagrant, which I have to then provision with ansible. Unfortunately my host is a windows machine, so I thought I could solve the issue by putting all the vms into a vpn and then provision them from another machine in the same vpn.
In theory, it works... I can ssh into the other machines without trouble. But when I run my ansible playbook, ansible fails.
At first I got the message "ssh: connect to host 10.1.2.100 [10.1.2.100] port 22: No route to host" when running ansible with -vvvv
This was in the evening, and I was very tired, and this error didn't recur the following morning. Not sure if it's got something to do with the VM I'm doing deployment from having been rebooted in the meantime, or the receiving machine having been destroyed and brought up again completely since then. In any case, the problem has not gone away.
Results now, after recreating both VMs:
# ansible-playbook -i vms -k -u vagrant vms.yml -vvvv
result:
<10.1.2.100> ESTABLISH SSH CONNECTION FOR USER: vagrant <10.1.2.100>
SSH: EXEC sshpass -d14 ssh -C -vvv -o ServerAliveInterval=50 -o
User=vagrant -o ConnectTimeout=10 -tt 10.1.2.100 '( umask 22 && mkdir
-p "$( echo $HOME/.ansible/tmp/ansible-tmp-1455781388.36-25193904947084 )" && echo
"$( echo $HOME/.ansible/tmp/ansible-tmp-1455781388.36-25193904947084
)" )' fatal: [10.1.2.100]: FAILED! => {"failed": true, "msg": "ERROR!
Using a SSH password instead of a key is not possible because Host Key
checking is enabled and sshpass does not support this. Please add
this host's fingerprint to your known_hosts file to manage this
host."}
So far so clear. I ssh into the other instance to add it to the known hosts. This works without any trouble.
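(For reference, that manual step can also be scripted; using ssh-keyscan is a common shortcut, my suggestion rather than something from the original post:)
ssh-keyscan 10.1.2.100 >> ~/.ssh/known_hosts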
Back to ansible, I try the same command again. The result now is:
<10.1.2.100> ESTABLISH SSH CONNECTION FOR USER: vagrant <10.1.2.100>
SSH: EXEC sshpass -d14 ssh -C -vvv -o ServerAliveInterval=50 -o
StrictHostKeyChecking=no -o User=vagrant -o ConnectTimeout=10 -tt
10.1.2.100 '( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1455782149.99-271768166468916 )" &&
echo "$( echo
$HOME/.ansible/tmp/ansible-tmp-1455782149.99-271768166468916 )" )'
<10.1.2.100> PUT /tmp/tmpXQKa8Z TO
/home/vagrant/.ansible/tmp/ansible-tmp-1455782149.99-271768166468916/setup
<10.1.2.100> SSH: EXEC sshpass -d14 sftp -b - -C -vvv -o
ServerAliveInterval=50 -o StrictHostKeyChecking=no -o User=vagrant -o
ConnectTimeout=10 '[10.1.2.100]' fatal: [10.1.2.100]: UNREACHABLE! =>
{"changed": false, "msg": "ERROR! SSH Error: data could not be sent to
the remote host. Make sure this host can be reached over ssh",
"unreachable": true}
Well, I made sure the host was reachable by ssh, thank you very much! Ansible still can't get through, and I'm about to get a brain tumor from thinking of things that might be the problem.
Any suggestions what might be the problem?
This issue was reported here, with some workarounds:
https://github.com/ansible/ansible/issues/15321
The consensus seems to be either to (a) use ansible_password or (b) pass -u username in the connection parameters. However, any number of things can disrupt an SSH connection in ways that make it look "unreachable" to higher-level apps, so I recommend going through each of the steps outlined in that ticket.
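A minimal sketch of workaround (a) in inventory form (the IP and credentials are placeholders based on the question; the variable names assume Ansible 2.0 or later, older versions use ansible_ssh_user/ansible_ssh_pass):
# inventory file 'vms'
10.1.2.100 ansible_user=vagrant ansible_password=vagrant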

How to automatically start tmux on SSH session?

I have ten or so servers that I connect to with SSH on a regular basis. Each has an entry in my local computer's ~/.ssh/config file.
To avoid losing control of my running process when my Internet connection inevitably drops, I always work inside a tmux session. I would like a way to have tmux automatically connect every time an SSH connection is started, so I don't have to always type tmux attach || tmux new after I SSH in.
Unfortunately this isn't turning out to be as simple as I originally hoped.
I don't want to add any commands to the ~/.bashrc on the servers because I only want it for SSH sessions, not local sessions.
Adding tmux attach || tmux new to the ~/.ssh/rc on the servers simply results in the error "not a terminal" being thrown after connection, even when the RequestTTY force option is added to the line for that server in my local SSH config file.
Server-side configuration:
To automatically start tmux on your remote server when ordinarily logging in via SSH (and only SSH), edit the ~/.bashrc of your user or root (or both) on the remote server accordingly:
if [[ $- =~ i ]] && [[ -z "$TMUX" ]] && [[ -n "$SSH_TTY" ]]; then
    tmux attach-session -t ssh_tmux || tmux new-session -s ssh_tmux
fi
This command creates a tmux session called ssh_tmux if none exists, or reattaches to an already existing session with that name. In case your connection dropped, or you forgot about a session weeks ago, every SSH login automatically brings you back to the ssh_tmux session you left behind.
Connect from your client:
Nothing special, just ssh user@hostname.
Don't do this on the server-side!
That is potentially dangerous because you can end up being locked out of the remote machine. And no shell hacks / aliases / etc. are required, either.
Instead...
... make use of (your client's) ~/.ssh/config like so:
tmux 3.1 or newer¹ on the remote machine
Into your local ~/.ssh/config, put²:
Host myhost
    Hostname host
    User user
    RequestTTY yes
    RemoteCommand tmux new -A -s foobar
As pointed out by @thiagowfx, this has the side effect of making it impossible to use, e.g., ssh myhost ls /tmp, and should therefore not be used with Host *. What I like to do is have a Host myhost section with RemoteCommand tmux ... and then, in addition to that, a Host MYHOST section without it, as sketched below.
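A sketch of that two-entry layout (host, user and session names are placeholders matching the config above):
Host myhost                  # plain entry: ssh myhost ls /tmp, scp, git, etc. keep working
    Hostname host
    User user

Host MYHOST                  # same machine, but drops you straight into tmux
    Hostname host
    User user
    RequestTTY yes
    RemoteCommand tmux new -A -s foobar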
Instead of RequestTTY yes you could call ssh with the -t switch; thank you, @kyb.
Off-topic, but if you're dealing with non-ASCII characters, I'd recommend to change that into tmux -u … for explicitly enabling Unicode support even on machines that don't have the proper environment variables set.
tmux 3.0a or older on the remote machine
Almost the same as above, but change the last line to³:
RemoteCommand tmux at -t foobar || tmux new -s foobar
¹ repology.org has a list of distros and their tmux versions
² new is short for new-session.
³ at is short for attach-session.
Only if, for some reason, you really, really can't do it client-side:
Using the remote's authorized_keys file
If you would rather not have an ~/.ssh/config file for whatever reason, or want the remote machine to force the connecting machine to connect to / open the session, add this to your remote ~/.ssh/authorized_keys:
command="tmux at -t foobar || tmux new -s foobar" pubkey user#client
This will, of course, work from all clients having the corresponding private key installed, which some might consider an upside. But should anything go wrong, it might not be possible to connect anymore without (semi-)physical access to the machine!
One caveat!
As @thiagowfx notes in the comments, this should not be put underneath Host * as it breaks certain things, such as git push. What I personally do is to add a second entry in all-uppercase letters for where I want to automatically be connected to tmux.
Alright, I found a mostly satisfactory solution. In my local ~/.bashrc, I wrote a function:
function ssh () { /usr/bin/ssh -t "$@" "tmux attach || tmux new"; }
which basically overrides the ssh command to call the real ssh program with the given arguments, followed by "tmux attach || tmux new".
(The $@ denotes all arguments provided on the command line, so ssh -p 123 user@hostname will be expanded to ssh -t -p 123 user@hostname "tmux attach || tmux new".)
(The -t argument is equivalent to RequestTTY Force and is necessary for the tmux command.)
Connect:
ssh user#host -t "tmux new-session -s user || tmux attach-session -t user"
During session:
Use Ctrl+d to finish the session (the tmux window closes) or Ctrl+b d to temporarily detach from the session and connect to it again later.
Remember: if your server restarts, the session is lost!
When you are inside tmux, you can use Ctrl+b s at any time to see the session list and switch to another one.
Fix your .bashrc:
I recommend defining a universal function in your .bashrc:
function tmux-connect {
    TERM=xterm-256color ssh -p ${3:-22} $1@$2 -t "tmux new-session -s $1 || tmux attach-session -t $1"
}
It uses port 22 by default. Define your fast-connect aliases too:
alias office-server='tmux-connect $USER 192.168.1.123'
alias cloud-server='tmux-connect root my.remote.vps.server.com 49281'
Login without password:
And if you don't want to type a password every time, then generate SSH keys to log in automatically:
ssh-keygen -t rsa
eval "$(ssh-agent -s)" && ssh-add ~/.ssh/id_rsa
Put your public key to the remote host:
ssh-copy-id -p <port> user@hostname
Additional tips:
If you want to use a temporary session ID that corresponds to your local bash session as the tmux ID:
SID=$USER-$BASHPID
ssh user#host -t "tmux new-session -s $SID || tmux attach-session -t $SID"
I used lines from @kingmeffisto (I'm not allowed to comment on that answer) and I added an exit so terminating tmux also terminates the ssh connection. This however broke SFTP sessions, so I had to check for $SSH_TTY instead of $SSH_CONNECTION.
EDIT 4/2018: Added test for interactive terminal via [[ $- =~ i ]] to allow tools like Ansible to work.
if [ -z "$TMUX" ] && [ -n "$SSH_TTY" ] && [[ $- =~ i ]]; then
tmux attach-session -t ssh || tmux new-session -s ssh
exit
fi
As described in this blog post you can ssh and then attach to an existing tmux session with a single command:
ssh hostname -t tmux attach -t 0
This is the one that actually creates a great user-experience.
It automatically starts tmux whenever you open the terminal (both physically and ssh).
You can start your work on one device, exit the terminal, and resume on the other one. If it detects that someone is already attached to the session, it will create a new session.
Put it on the server in ~/.zshrc or ~/.bashrc, depending on your shell.
if [[ -z "$TMUX" ]] ;then
ID="$( tmux ls | grep -vm1 attached | cut -d: -f1 )" # get the id of a deattached session
if [[ -z "$ID" ]] ;then # if not available attach to a new one
tmux new-session
else
tmux attach-session -t "$ID" # if available attach to it
fi
fi
I have the following solution that gives you two SSH hosts to connect to: one with tmux, one without:
# Common rule that 1) copies your tmux.conf and 2) runs tmux on the remote host
Host *-tmux
    LocalCommand scp %d/.tmux.conf %r@%n:/home/%r/
    RemoteCommand tmux new -As %r
    RequestTTY yes
    PermitLocalCommand yes

# Just connect.
# Notice the asterisk: it makes it possible to re-use the connection parameters
Host example.com*
    HostName example.com
    User login

# Connect with tmux
Host example.com-tmux
    HostKeyAlias dev.dignio.com
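With this in place, choosing between a plain shell and a tmux session is just a matter of which alias you connect to (names as in the config above):
ssh example.com          # plain connection
ssh example.com-tmux     # copies your .tmux.conf over and attaches to (or creates) the tmux session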
You might find this useful: it uses ssh in a loop and reconnects to an existing tmux session (or creates a new one), so you have a nice, easy, reliable way to reconnect after a network outage.
#!/bin/bash
#
# reconnect to or spawn a new tmux session on the remote host via ssh.
# If the network connection is lost, ssh will reconnect after a small
# delay.
#
SSH_HOSTNAME=$1
TMUX_NAME=$2
PORT=$3
if [[ "$PORT" != "" ]]
then
PORT="-p $PORT"
fi
if [ "$TMUX_NAME" = "" ]
then
SSH_UNIQUE_ID_FILE="/tmp/.ssh-UNIQUE_ID.$LOGNAME"
if [ -f $SSH_UNIQUE_ID_FILE ]
then
TMUX_NAME=`cat $SSH_UNIQUE_ID_FILE`
TMUX_NAME=`expr $TMUX_NAME + $RANDOM % 100`
else
TMUX_NAME=`expr $RANDOM % 1024`
fi
echo $TMUX_NAME > $SSH_UNIQUE_ID_FILE
TMUX_NAME="id$TMUX_NAME"
fi
echo Connecting to tmux $TMUX_NAME on hostname $SSH_HOSTNAME
SLEEP=0
while true; do
ssh $PORT -o TCPKeepAlive=no -o ServerAliveInterval=15 -Y -X -C -t -o BatchMode=yes $SSH_HOSTNAME "tmux attach-session -t $TMUX_NAME || tmux -2 -u new-session -s $TMUX_NAME"
SLEEP=10
if [ $SLEEP -gt 0 ]
then
echo Reconnecting to session $TMUX_NAME on hostname $SSH_HOSTNAME in $SLEEP seconds
sleep $SLEEP
fi
done
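Saved as an executable file, say tmux-ssh.sh (the name is just an example), it would be invoked like this:
./tmux-ssh.sh myhost             # auto-generated session name, default SSH port
./tmux-ssh.sh myhost work 2222   # session name "work", SSH port 2222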
byobu is a nice, useful wrapper for tmux/screen. It connects to an existing session if present, or creates a new one.
I use it with autossh, which gracefully reconnects the ssh session. Highly recommended in case of intermittent connectivity issues.
function ssh-tmux() {
    if ! command -v autossh &> /dev/null; then echo "Install autossh"; return 1; fi
    autossh -M 0 "$@" -t 'byobu || { echo "Install byobu-tmux on server..."; } && bash'
}
I know I'm reviving an old thread but I've done some work on the bashrc solution and I think it has some use:
# attach to the next available tmux session that's not currently occupied
if [[ -z "$TMUX" ]] && [ "$SSH_CONNECTION" != "" ]; then
    for i in `seq 0 10`; do  # max of 10 sessions - don't want an infinite loop until we know this works
        # send errors to /dev/null - if the session doesn't exist it will throw an error, but we don't care
        SESH=`tmux list-clients -t "$USER-$i-tmux" 2>/dev/null`
        if [ -z "$SESH" ]; then  # if there are no clients currently connected to this session
            tmux attach-session -t "$USER-$i-tmux" || tmux new-session -s "$USER-$i-tmux"  # attach to it
            break  # found one and using it, don't keep looping (this will actually run after tmux exits AFAICT)
        fi  # otherwise, increment the session counter and keep going
    done
fi
There's a cap at 10 (11) sessions for now - I didn't want to kill my server with an infinite loop in bashrc. It seems to work pretty reliably, other than the error of tmux failing on list-clients if the session doesn't exist.
This way allows you to reconnect to an old tmux instance if your ssh session drops. The exec saves a fork, of course.
if [ -z "$TMUX" ]; then
pid=$(tmux ls | grep -vm1 "(attached)" | cut -d: -f1)
if [ -z "$pid" ]; then
tmux new -d -s $pid
fi
exec tmux attach -t $pid
fi
Append to the bottom of your remote server's ~/.bashrc (or possibly its /etc/.bashrc.shared (1)):
# ======================== PUT THIS LAST IN .BASHRC ==========================
# --- If we're run by SSH, then auto start `tmux` and possibly re-attach user.
# $- interactive only via current option flags
# -z $TMUX no tmux nesting
# $SSH_TTY SSH must be running, and in a shell
#
if [[ $- == *i* ]] && [[ -z "$TMUX" ]] && [[ -n "$SSH_TTY" ]]; then
tmux attach-session -t "$USER" || tmux new-session -s "$USER" && exit
fi
Many good tips from above are combined here; e.g., $- and $SSH_TTY are better checks, I think.
And I like adding a few comments to help this old guy remember what's going on without having to look it up.
And finally, I like an exit at the end to cleanly come home when I'm done.
Thanks everyone.
(1) Note: I source a shared /etc/.bashrc.shared at the end of both the user's and root's .bashrc, for common stuff used in both, like colorized ls, various aliases, functions, and path extensions, i.e. I don't want redundant code in my root's .bashrc nor in my user's .bashrc.
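A minimal sketch of that shared-file setup (the path /etc/.bashrc.shared is this answer's own convention):
# last line of both /root/.bashrc and the user's ~/.bashrc
[ -f /etc/.bashrc.shared ] && . /etc/.bashrc.shared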
This guy's script works great. Just copy the bashrc-tmux file to ~/.bashrc-tmux and source it from ~/.bashrc right after the PS1 check section.