release scp connection when no response from server - ssh

I have to collect measurement files from different servers, so I use the scp command to retrieve them.
But if the remote server is hung or not responding, I need to close the connection and put a 0 in my measurement file.
Is there an option in the scp command that lets me close the connection after, for example, 10 seconds?
for serv in $SERV_LIST
do
    echo "--- Working on server: $serv ---"
    trc_file=`ssh user@$serv "$(typeset -f collectSTATS); collectSTATS $serv $DATE $LastRunTime"`
    scp user@$serv:/tmp/result_rechHM2_$serv.tmp /home/voms/HDB2/result_rechHM2_$serv.tmp > /dev/null 2>&1
    deleteFile=`ssh voms@$serv "rm /tmp/result_rechHM2_$serv.tmp 2> /dev/null"`
    if [ -f /home/voms/HDB2/result_rechHM2_* ]
    then
        cat /home/voms/HDB2/result_rechHM2_* >> /home/voms/HDB2/TraceRecharge.log
        rm -rf /home/voms/HDB2/result_rechHM2_*
    fi
done
When the ssh or scp command gets no response, I want to wait only 10 seconds before giving up.

We just used ssh -o ConnectTimeout=5.
It resolved my problem.
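For scp the same option can be passed through -o. A minimal sketch of how this could be applied to the loop above (the ServerAlive* options and the coreutils timeout wrapper are extra safeguards beyond the accepted answer; the paths and variables are the ones from the question):
# ConnectTimeout only covers the connection phase; the ServerAlive* options and
# the "timeout" wrapper also cut off a server that stops responding mid-transfer
SSH_OPTS="-o ConnectTimeout=10 -o ServerAliveInterval=5 -o ServerAliveCountMax=2"

for serv in $SERV_LIST
do
    if ! timeout 30 scp $SSH_OPTS user@$serv:/tmp/result_rechHM2_$serv.tmp \
            /home/voms/HDB2/result_rechHM2_$serv.tmp > /dev/null 2>&1
    then
        # Server hung or unreachable: record a 0 as described in the question
        echo "0" > /home/voms/HDB2/result_rechHM2_$serv.tmp
    fi
done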

Related

Running a script when connecting to server with ssh

I use the kitty terminal emulator, so when I connect to a new server, I (usually) need to add its terminfo (at least, that way it seems to work). To do this I wrote a script. While I was at it, I also added a bit of code to install a public key if the user wants to.
Not really relevant for the question, but here is the code:
#!/bin/bash
host=$1
ip=$(echo $host | cut -d@ -f2 | cut -d: -f1)
# Check if it is an unknown host
if [[ -z $(ssh-keygen -F $ip) ]]; then
    # Check if there are any ssh keys
    if [ $(ls $HOME/.ssh/*.pub 2> /dev/null | wc -l) -ne 0 ]; then
        keys=$(echo $( (cd $HOME/.ssh/ && ls *.pub) | sed "s/.pub//g" ))
        ssh -q -o PubkeyAuthentication=yes -o PasswordAuthentication=no $host "ls > /dev/null 2>&1"
        # Check if the server already accepts one of the public keys
        if [ $? -ne 0 ]; then
            echo "Do you want to add an SSH key to the server?"
            while true; do
                read -p " Choose [$keys] or leave empty to skip: " key
                if [[ -z $key ]]; then
                    break
                elif [[ -e $HOME/.ssh/$key ]]; then
                    # Give the server a public key
                    ssh $host "mkdir -p ~/.ssh && chmod 700 ~/.ssh && echo \"$(cat $HOME/.ssh/$key.pub)\" >> ~/.ssh/authorized_keys"
                    break
                else
                    echo "No key with the name \"$key\" found."
                fi
            done
        fi
    fi
    # Copy terminfo to the server
    ssh -t $host "echo \"$(infocmp -x)\" > \"\$TERM.info\" && tic -x \"\$TERM.info\" && rm \$TERM.info"
fi
It is not the best code, but it seems to work. Tips are of course welcome.
The problem is that I need to run this script every time I connect to a new remote server (or keep track of which servers are new, which is even worse). Is there a way to run this script every time I connect to a server? (The script already checks whether the IP is a known host.)
Or is there another way to do this? Adding the public keys is nice to have, but not very important.
I hope someone can help.
Thanks!
There is a trick to detect whether you are logging in to the target machine via ssh:
pgrep -af "sshd.*"$USER | wc -l
The above command counts the user's processes spawned by sshd.
You can add this check on the target machine, in its .profile or .bash_profile, to test whether you are connected via ssh.
That way, the initialization script runs on the target machine only when you log in/connect via ssh.
Sample .bash_profile on target machine
#!/bin/bash
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
if [[ $(pgrep -af "sshd.*"$USER |wc -l) -gt 0 ]]; then
your_init_script
fi
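As a side note (not part of the answer above), a common alternative on machines running OpenSSH is to test the SSH_CONNECTION variable that sshd sets for the session, which avoids counting processes:
# Alternative sketch: sshd sets SSH_CONNECTION (and SSH_TTY for interactive logins),
# so testing it in .bash_profile on the target machine has the same effect
if [ -n "$SSH_CONNECTION" ]; then
    your_init_script
fi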

ssh on remote server with nohup - not able to create files

I'm calling a shell script on a remote server with ssh user@host '/dir/myscript.sh', and myscript.sh then runs another script with nohup, but the second script isn't writing to the nohup output files, or to any other file with echo "" > test.txt.
myscript.sh
#!/bin/bash
shopt -s expand_aliases
source /home/user/.bash_profile
nohup /dir/myscript-real.sh > my.out 2> my.err < /dev/null &
myscript-real.sh
#!/bin/bash
shopt -s expand_aliases
source /home/user/.bash_profile
echo -e "test"
sleep 15
echo "2nd test"
echo "test" > test.out
exit
When run via SSH, no files are created (my.out, my.err or test.out), but when running myscript.sh directly, those 3 files are created.
Any ideas where the problem is?
Thanks.

Streaming stdout from remote shell call

I have a read-only remote filesystem that stores logs.
I use ssh -t to run grep queries on these logs. Sometimes the queries take too long and cause the ssh connection to time out.
Is there some way to stream the stdout back and keep the ssh connection alive?
Example command:
ssh -t my-host.com "cd /path/to/my/folder ; find ./ -name '*' -print0 | xargs -0 -n1 -P8 zgrep -B 5 -H 'My search string'" > search_result.txt
Thanks

How to enable sshpass output to console

When using scp and entering the password interactively, the file copy progress is shown on the console, but there is no console output when using sshpass in a script to scp files.
$ sshpass -p [password] scp [file] root@[ip]:/[dir]
It seems sshpass is suppressing or hiding the console output of scp. Is there a way to enable console output for scp when run through sshpass?
After
sudo apt-get install expect
the file send-files.exp works as desired:
#!/usr/bin/expect -f
# $FILES and $DEST are placeholders; set them earlier in the script as Tcl variables
spawn scp -r $FILES $DEST
match_max 100000
expect "*?assword:*"
send -- "12345\r"
expect eof
Not exactly what was desired, but better than silence:
SSHPASS="12345" sshpass -e scp -v -r $FILES $DEST 2>&1 | grep -v debug1
Note that -e is considered a bit safer than -p.
Output:
Executing: program /usr/bin/ssh host servername, user username, command scp -v -t /src/path/dst_file.txt
OpenSSH_6.6.1, OpenSSL 1.0.1i-fips 6 Aug 2014
Authenticated to servername ([10.11.12.13]:22).
Sending file modes: C0600 590493 src_file.txt
Sink: C0600 590493 src_file.txt
Transferred: sent 594696, received 2600 bytes, in 0.1 seconds
Bytes per second: sent 8920671.8, received 39001.0
In this way:
output=$(sshpass -p $PASSWD scp -v $filename root@192.168.8.1:/root 2>&1)
echo "Output = $output"
you redirect the console output into the variable output.
Or, if you only want to see the console output of the scp command, just add the -v option to your sshpass command:
sshpass -p $PASSWD scp -v $filename root@192.168.8.1:/root
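A small variation on the above (using the same $PASSWD and $filename placeholders) is to show the verbose output live and keep a copy of it at the same time with tee:
# Print scp's verbose output on the console and also keep a copy in scp.log
sshpass -p $PASSWD scp -v $filename root@192.168.8.1:/root 2>&1 | tee scp.log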

Running ssh command and keeping connection

Is there a way to execute a command before accessing a remote terminal?
When I enter this command:
$> ssh user@server.com 'ls'
the ls command is executed on the remote computer, but ssh quits and I cannot continue my remote session.
Is there a way of keeping the connection open? The reason I am asking is that I want to set up my ssh session without having to modify the remote .bashrc file.
I would force the allocation of a pseudo tty and then run bash after the ls command:
syzdek@host1$ ssh -t host2.example.com 'ls -l /dev/null; bash'
-rwxrwxrwx 1 root other 27 Apr 1 2005 /dev/null
bash-4.1$
You can try using process substitution on the init file of bash. In the example below, I define a function myfunc:
myfunc () {
echo "Running myfunc"
}
which I transform into a properly escaped one-liner echoed in the <(...) construct for process substitution, passed as the --init-file argument of bash:
$ ssh -t localhost 'bash --init-file <( echo "myfunc() { echo \"Running myfunc\" ; }" ) '
Password:
bash-3.2$ myfunc
Running myfunc
bash-3.2$ exit
Note that once connected, my .bashrc is not sourced but myfunc is defined and properly usable in an interactive session.
It might prove a little difficult for more complex bash functions, but it works.
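If the remote .bashrc should still be applied, one possible variation (a sketch of my own, not taken from the answer above) is to source it inside the generated init file before defining the extra function:
# Sketch: source the remote ~/.bashrc first, then define myfunc, keeping the interactive session
ssh -t localhost 'bash --init-file <( echo "[ -f ~/.bashrc ] && . ~/.bashrc; myfunc() { echo \"Running myfunc\" ; }" )'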