How do I reduce the connection timeout of telnet on Unix?

I have a Unix shell script which tests the FTP port of multiple hosts listed in a file:
for i in `cat ftp-hosts.txt`
do
echo "QUIT" | telnet $i 21
done
In general this script works; however, if I encounter a host which does not connect (i.e. telnet hangs at "Trying..."), how can I reduce this wait time so it can move on and test the next host?

Have you tried using netcat (nc) instead of telnet? It has more flexibility, including being able to set the timeout:
echo 'QUIT' | nc -w SECONDS YOUR_HOST PORT
# e.g.
echo "QUIT" | nc -w 5 localhost 21
The -w 5 option will time out the connection after 5 seconds.
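Applied to the host file from the question, the whole check might look roughly like this (a sketch; exit-status conventions differ slightly between netcat variants, so treat the success test as approximate):
#!/bin/sh
# Probe port 21 on every host in ftp-hosts.txt, waiting at most 5 seconds per host.
while read -r host
do
    if echo "QUIT" | nc -w 5 "$host" 21 > /dev/null 2>&1
    then
        echo "$host: FTP port open"
    else
        echo "$host: no connection within 5 seconds"
    fi
done < ftp-hosts.txt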

Try the timeout3 script. It is very robust and I have used it a lot, without problems, in different situations.
Example: wait just 3 seconds while checking whether the SSH port is open.
> echo QUIT > quit.txt
> ./timeout3 -t 3 telnet HOST 22 < quit.txt
In the output you can grep for "Connected" or "Terminated".
timeout3 file contents:
#!/bin/bash
#
# The Bash shell script executes a command with a time-out.
# Upon time-out expiration SIGTERM (15) is sent to the process. If the signal
# is blocked, then the subsequent SIGKILL (9) terminates it.
#
# Based on the Bash documentation example.
# If you find it suitable, feel free to include it anywhere: it uses the very
# same logic as in the original examples/scripts, just a slightly more
# transparent implementation to my taste.
#
# Dmitry V Golovashkin <Dmitry.Golovashkin@sas.com>
scriptName="${0##*/}"
declare -i DEFAULT_TIMEOUT=9
declare -i DEFAULT_INTERVAL=1
declare -i DEFAULT_DELAY=1
# Timeout.
declare -i timeout=DEFAULT_TIMEOUT
# Interval between checks if the process is still alive.
declare -i interval=DEFAULT_INTERVAL
# Delay between posting the SIGTERM signal and destroying the process by SIGKILL.
declare -i delay=DEFAULT_DELAY
function printUsage() {
cat <<EOF
Synopsis
$scriptName [-t timeout] [-i interval] [-d delay] command
Execute a command with a time-out.
Upon time-out expiration SIGTERM (15) is sent to the process. If SIGTERM
signal is blocked, then the subsequent SIGKILL (9) terminates it.
-t timeout
Number of seconds to wait for command completion.
Default value: $DEFAULT_TIMEOUT seconds.
-i interval
Interval between checks if the process is still alive.
Positive integer, default value: $DEFAULT_INTERVAL seconds.
-d delay
Delay between posting the SIGTERM signal and destroying the
process by SIGKILL. Default value: $DEFAULT_DELAY seconds.
As of today, Bash does not support floating point arithmetic (sleep does),
therefore all delay/time values must be integers.
EOF
}
# Options.
while getopts ":t:i:d:" option; do
    case "$option" in
        t) timeout=$OPTARG ;;
        i) interval=$OPTARG ;;
        d) delay=$OPTARG ;;
        *) printUsage; exit 1 ;;
    esac
done
shift $((OPTIND - 1))
# $# should be at least 1 (the command to execute), however it may be strictly
# greater than 1 if the command itself has options.
if (($# == 0 || interval <= 0)); then
printUsage
exit 1
fi
# kill -0 pid Exit code indicates if a signal may be sent to $pid process.
(
    ((t = timeout))
    while ((t > 0)); do
        sleep $interval
        kill -0 $$ || exit 0
        ((t -= interval))
    done
    # Be nice, post SIGTERM first.
    # The 'exit 0' below will be executed if any preceding command fails.
    kill -s SIGTERM $$ && kill -0 $$ || exit 0
    sleep $delay
    kill -s SIGKILL $$
) 2> /dev/null &
exec "$@"
#
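Applied to the FTP check from the question, a rough sketch (it assumes timeout3 sits in the current directory and quit.txt contains the QUIT command, as above):
#!/bin/bash
echo "QUIT" > quit.txt
while read -r host
do
    if ./timeout3 -t 3 telnet "$host" 21 < quit.txt 2>&1 | grep -q "Connected"
    then
        echo "$host: FTP port open"
    else
        echo "$host: no connection within 3 seconds"
    fi
done < ftp-hosts.txt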

If you have nmap:
nmap -iL hostfile -p21 | awk '/Interesting/{ip=$NF}/ftp/&&/open/{print "ftp port opened for: "ip}'

Start a background process to sleep and then kill the telnet process. Roughly:
echo QUIT >quit.txt
telnet $i 21 < quit.txt &
sleep 10 && kill -9 %1 &
wait %1; ex=$?
kill %2
# Now check $ex for exit status of telnet. Note: 127 indicates success as the
# telnet process completed before we got to the wait.
I avoided the echo QUIT | telnet pipeline to leave no ambiguity when it comes to the exit code of the first job.
This code has not been tested.
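A slightly more complete sketch of the same idea, using PIDs instead of job specs (equally untested against real hosts; the 10-second limit simply mirrors the example above):
#!/bin/bash
echo "QUIT" > quit.txt
while read -r host
do
    telnet "$host" 21 < quit.txt &                      # the probe
    telnet_pid=$!
    ( sleep 10; kill -9 "$telnet_pid" 2>/dev/null ) &   # watchdog: kill the probe after 10 seconds
    watchdog_pid=$!
    wait "$telnet_pid"; ex=$?                           # 137 (128+9) means the watchdog killed it
    kill "$watchdog_pid" 2>/dev/null                    # stop the watchdog if telnet finished first
    echo "$host: telnet exited with status $ex"
done < ftp-hosts.txt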

Use timeout in order to quit in X seconds whether the operation succeeds or fails:
timeout runs a command with a time limit: start COMMAND, and kill it
if it is still running after DURATION.
Formula:
timeout <seconds> <operation>
example:
timeout 5 ping google.com
your example:
for i in `cat ftp-hosts.txt`
do
timeout 5 telnet $i 21
done
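To tell the two outcomes apart, GNU timeout exits with status 124 when it had to kill the command, so the loop can report timeouts explicitly:
for i in `cat ftp-hosts.txt`
do
    echo "QUIT" | timeout 5 telnet "$i" 21
    if [ $? -eq 124 ]
    then
        echo "$i: timed out after 5 seconds"
    fi
done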

Related

How to prevent nested tmux session in a new login shell?

Almost all solutions (1, 2) that make tmux run at shell startup depend on environment variables like $TMUX, $TERM, etc. But when we start a login shell, such as with su -, all variables are cleared except $TERM. So we can rely on $TERM to avoid starting nested sessions. Let's say the default $TERM is xterm and we set screen in .tmux.conf to identify that we are in a tmux session. This works fine for local login.
Now, two machines A and B use the same rule to control nested sessions and we are in a tmux session on machine A. When we log in remotely (through ssh) from A to B, a tmux session won't start on B because $TERM is already set to screen.
So, isn't there a way to find out that we are already in a tmux session without depending on environment variables?
PS:
I'm posting, as an answer, a workaround that I use to achieve the behavior described above. But a more accurate and better method, for example one that works using tmux commands, would be much appreciated.
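For context, the conventional $TERM-based rule referred to above looks roughly like this (a sketch; it assumes default-terminal is set to "screen" in .tmux.conf, so that $TERM inside tmux reports screen):
# ~/.tmux.conf
set -g default-terminal "screen"

# ~/.bashrc
case "$TERM" in
    screen*) ;;         # already inside tmux (or GNU screen): do nothing
    *) exec tmux ;;     # otherwise replace this shell with a tmux session
esac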
This solution works by finding out whether the current terminal is connected to a tmux server running on the same machine. To detect that connection, we make use of the pseudoterminal pair and an I/O-statistics hack.
However, it may fail if the relevant /proc or /dev files are not readable/writable by the user. For instance, if the tmux server was launched by root, a non-root user won't be able to find it.
Also, we may get false positives if the tmux server is receiving data from some other source at the same time we are writing zeros to it.
Put this at the end of .bashrc or whichever shell startup file you use:
# ~/.bashrc
# don't waste time if the $TMUX environment variable is set
[ -z "$TMUX" ] || return
# don't start a tmux session if current shell is not connected to a terminal
pts=$(tty) || return
# find out processes connected to master pseudoterminal
for ptm in $(fuser /dev/ptmx 2>/dev/null)
do
# ignore process if it's not a tmux server
grep -q tmux /proc/$ptm/comm || continue
# number of bytes already read by tmux server
rchar_old=$(awk '/rchar/ {print $2}' /proc/$ptm/io)
# write 1000 bytes to the current slave pseudoterminal
dd bs=1 count=1000 if=/dev/zero of=$pts &>/dev/null
# read number of bytes again and find difference
diff=$(( $(awk '/rchar/ {print $2}' /proc/$ptm/io) - rchar_old ))
# if it is at least 1000, the current terminal is connected to this tmux server
# (in practice diff usually comes out greater than 1000)
[ $diff -ge 1000 ] && return
done
# start or attach to a tmux session
echo 'Press any key to interrupt tmux session.'
read -st1 key && return
# connect to a detached session if one exists for the current user
session=($(tmux list-sessions 2>/dev/null | sed -n '/(attached)/!s/:.*//p'))
[ -z "$session" ] || exec tmux a -t "${session[0]}"
# start a new session after all
exec tmux

Enable keepalives in Plink

We are using Plink for a tunnel to a MySQL server. We are using it in this format:
plink.exe -L [Port of our client]:[my-sql server host name]:3306 [bridge server ssh username]@[bridge server IP] -i [private key]
We cannot find an option to prevent the connection from being closed, a sort of keepalive.
How could we achieve this?
Instead of a keepalive that plink manages internally, another option is to use the shell that is created on the host to keep sending short bits of data on the wire. This can be done through a very simple shell script such as:
while true;
do echo 0;
sleep 30s;
done
This very simple bash script will write the character 0 every 30 seconds to the screen.
A full example of the whole command line when invoking plink:
plink -P 443 [user@]host.com -R *:80:127.0.0.1:80 -C -T while true; do echo 0; sleep 30s; done
Plink does not have any command-line option for keepalives.
All you can do is configure a stored session in the PuTTY GUI with the keepalive enabled and then re-use that session in Plink using the -load switch.
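For example, after creating a saved session in the PuTTY GUI (called, say, mysql-bridge here, a made-up name) with "Seconds between keepalives" set to a non-zero value on the Connection panel, the tunnel command becomes roughly:
plink.exe -load "mysql-bridge" -L [Port of our client]:[my-sql server host name]:3306 -i [private key]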

autossh quits because ssh (dropbear) can't resolve host

I run autossh on a system which might or might not have internet connectivity. I don't really know when it has a connection, but when it does I want autossh to establish an SSH tunnel with:
autossh -M 2000 -i /etc/dropbear/id_rsa -R 5022:localhost:22 -R user@host.name -p 6022 -N
After several seconds it throws:
/usr/bin/ssh: Exited: Error resolving 'host.name' port '6022'. Name or service not known
And that's it. Isn't autossh meant to keep the ssh process running no matter what? Do I really have to check for a connection with ping or the like first?
You need to set the AUTOSSH_GATETIME environment variable to 0. From autossh(1):
Startup behaviour
1. If the ssh session fails with an exit status of 1 on the very first try,
   autossh will assume that there is some problem with syntax or the connection
   setup, and will exit rather than retrying;
2. There is a "starting gate" time. If the first ssh process fails within the
first few seconds of being started, autossh assumes that it never made it
"out of the starting gate", and exits. This is to handle initial failed
authentication, connection, etc. This time is 30 seconds by default, and can
be adjusted (see the AUTOSSH_GATETIME environment variable below). If
AUTOSSH_GATETIME is set to 0, then both behaviours are disabled: there is no
"starting gate", and autossh will restart even if ssh fails on the first run
with an exit status of 1. The "starting gate" time is also set to 0 when the
-f flag to autossh is used.
AUTOSSH_GATETIME
Specifies how long ssh must be up before we consider it a successful
connection. The default is 30 seconds. Note that if AUTOSSH_GATETIME is set to 0,
then not only is the gatetime behaviour turned off, but autossh also ignores
the first run failure of ssh. This may be useful when running autossh at
boot.
Usage:
AUTOSSH_GATETIME=0 autossh -M 2000 -i /etc/dropbear/id_rsa -R 5022:localhost:22 -R user@host.name -p 6022 -N
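If autossh is started from a boot or wrapper script rather than an interactive shell, the variable can equally be exported there; a minimal sketch reusing the command from the question:
#!/bin/sh
# hypothetical wrapper script, e.g. started at boot
export AUTOSSH_GATETIME=0
exec autossh -M 2000 -i /etc/dropbear/id_rsa -R 5022:localhost:22 -R user@host.name -p 6022 -N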

How to allow entering password when issuing commands on remote server?

Here's my script:
#!/bin/sh
if [ $# -lt 1 ]; then
echo "Usage: $0 SERVER"
exit 255
fi
ssh $1 'su;apachectl restart'
I just want it to log in to the server and restart Apache, but it needs super-user privileges to do that. However, after it issues the su command it doesn't stop and wait for me to enter my password. Can I get it to do that?
See if this works for you! This solution takes the password before doing the SSH though...
#!/bin/sh
if [ $# -lt 1 ]; then
echo "Usage: $0 SERVER"
exit 255
fi
read -s -p "Password: " PASSWORD
ssh $1 "echo $PASSWORD | su -c 'apachectl restart'"
The -s option prevents echoing of the password while reading it from the user.
Take a look at expect. The best way to perform such operations is through an expect script. Here is a sample that I just typed up to give you a head start, but all it does right now is show you how to handle a password in expect.
#!/usr/bin/expect -f
set timeout 60
set pswd [lindex $argv 0]
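# NOTE: an interactive session (e.g. spawn ssh user@host) has to be started
# before send/expect have anything to talk to; this sample intentionally
# begins at the su/password handling.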
send "su\r"
match_max 2000
expect {
    -re "assword:$" {
        sleep 1
        set send_human {.2 .15 .25 .2 .25}
        send -h -- "$pswd\r"
        exp_continue
    } "login failed." {
        send "exit\r"
        log_user 1
        send_user "\r\nLogin failed.\r\n"
        exit 4
    } timeout {
        send "exit\r"
        log_user 1
        send_user "\r\nCommand timed out\r\n"
        exit 1
    } -re "(#|%)\\s*$" {
        # If we get here, then su succeeded
        sleep 1
        send "apachectl restart\r"
        expect {
            -re "(#|%)\\s*$" {
                send "exit\r"
                log_user 1
                send_user "\r\nApache restart Successful\r\n"
                exit 0
            } timeout {
                send "exit\r"
                log_user 1
                send_user "\r\nCommand timed out\r\n"
                exit 1
            }
        }
    }
}
Modifying your command to:
ssh -t $1 'sudo apachectl restart'
will open a TTY and allow sudo to interact with the remote system to ask for the user account's password without storing it locally in memory.
You could probably also modify your sudo config on the remote system to allow for execution without a password. You can do this by adding the following to /etc/sudoers (execute visudo and insert this line, substituting <username> appropriately.)
<username> ALL=NOPASSWD: ALL
Also, I'm a security guy and I really hope you understand the implications of allowing an SSH connection (presumably without a passphrase on the key) and remote command execution as root. If not, you should really, really, really rethink your setup here.
[Edit] Better still, edit your sudoers file to allow only apachectl to run without a password. Insert the following and modify %staff to match your user's group, or change that to your username without the percent sign.
%staff ALL=(ALL) NOPASSWD: /usr/sbin/apachectl
Then your command should simply be changed to:
ssh $1 'sudo apachectl restart'
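With that sudoers rule in place, the original script reduces to roughly:
#!/bin/sh
if [ $# -lt 1 ]; then
    echo "Usage: $0 SERVER"
    exit 255
fi
# -t allocates a TTY so sudo can still prompt for a password if the
# NOPASSWD rule above is not in place.
ssh -t "$1" 'sudo apachectl restart'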

Transitioning Expect script from Telnet to SSH only

Any help would be appreciated. I need to transition inherited Expect scripts from Telnet to SSH-only logins. First off, I'm a router guy who inherited all our Expect script templates, written a while back. So far, with few modifications, they've run smoothly. Our client wanted to move away from Telnet, so a few months ago we prepped all the Cisco routers and switches to support both Telnet and SSH. Until now, our scripts have been fine. However, Telnet support and the Telnet servers will go away soon, and I need to figure out how to reconfigure all the script templates to work in SSH-only environments.
So, here's an example of a simple template to get a sh ver output:
#!/usr/local/bin/expect -f
#
set force_conservative 0 ;# set to 1 to force conservative mode even if
;# script wasn't run conservatively originally
if {$force_conservative} {
set send_slow {1 .1}
proc send {ignore arg} {
sleep .1
exp_send -s -- $arg
}
}
####################################################################
# Info for command line arguments
set argv [ concat "script" $argv ]
set router [ lindex $argv 1 ]
####################################################################
set timeout 15
set send_slow {1 .05}
spawn telnet $router
match_max 100000
expect Username:
sleep .1
send -s -- "user\r"
expect Password:
sleep .1
send -s -- "pass\r"
expect *
send -s -- "\r"
expect *
sleep .2
send -s -- "sh ver\r"
expect *
sleep .2
send -s -- "end\r"
expect *
sleep .2
send -s -- "wr\r"
expect *
sleep .2
send -s -- "exit\r"
expect *
sleep .2
expect eof
Instead of:
spawn telnet $router
match_max 100000
expect Username:
sleep .1
send -s -- "user\r"
expect Password:
sleep .1
send -s -- "pass\r"
Try to use:
spawn ssh -M username@$router
while 1 {
    expect {
        "no)?"     {send "yes\r"}
        "sername:" {send "username\r"}
        "assword:" {send "password\r"}
        ">"        {break}
        "denied"   {send_user "Can't login\r"; exit 1}
        "refused"  {send_user "Connection refused\r"; exit 2}
        "failed"   {send_user "Host exists. Check ssh_hosts file\r"; exit 3}
        timeout    {send_user "Timeout problem\r"; exit 4}
    }
}
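Once the loop breaks at the ">" prompt, the remainder of the original template can be reused largely unchanged; a trimmed sketch based on the Telnet template above:
# logged in: continue exactly as the Telnet version did
sleep .2
send -s -- "sh ver\r"
expect *
sleep .2
send -s -- "exit\r"
expect *
sleep .2
expect eof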