I am trying to shut down my Synology NAS over SSH from a script after a backup, but it doesn't work. Before it tries to shut down the NAS, the script checks whether the NAS is available.
This is my script:
#!/bin/sh
export shutDown=false
while [ "$shutDown" = false ]
do
if ! ping -c1 NAS-IP &>/dev/null; then
echo "Ping Fail"
shutDown=true
else
echo "Host Found"
sshpass -p "NAS-Password" ssh -t root@NAS-IP 'shutdown -h now'
sleep 1m
fi
done
I want to shut it down because I only use it as a backup server. The Synology runs a RAID with 2 HDDs, so if one dies I just replace it. By "it doesn't work" I mean the script doesn't shut down the NAS.
I want ssh to forward the SIGTERM signal to the remote command.
ssh root@localhost /root/print-signal.py
Get PID of ssh:
ps aux | grep print-signal
Kill the matching ssh process:
kill pid-of-ssh
Unfortunately only the ssh process itself gets the signal, not the remote command (print-signal.py). The remote command does not terminate :-(
How can I make ssh "forward" the SIGTERM signal to the remote command?
I think you can do the following :
ssh root@localhost /root/print-signal.py
Get the PID of the Python script that is running:
ps aux | grep print-signal
Kill the matching process on the remote host:
ssh root#localhost "kill <pid>"
Here you are sending the kill command to the remote host.
Hope this solves your problem.
Send this via SSH (tested on a sleep process):
$(proc=$(pidof sleep) && kill $proc)
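For example, the whole thing can be sent in a single ssh call (a rough sketch, assuming the remote command keeps a sleep process running and reusing root@localhost from the question):
ssh root@localhost 'kill $(pidof sleep)'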
I run autossh on a system which may or may not have internet connectivity. I don't really know when it has a connection, but when it does I want autossh to establish an ssh tunnel with:
autossh -M 2000 -i /etc/dropbear/id_rsa -R 5022:localhost:22 -R user@host.name -p 6022 -N
After several seconds it throws:
/usr/bin/ssh: Exited: Error resolving 'host.name' port '6022'. Name or service not known
And that's it. Isn't autossh meant to keep the ssh process running no matter what? Do I really have to check for a connection with ping or the like first?
You need to set the AUTOSSH_GATETIME environment variable to 0. From autossh(1):
Startup behaviour
If the ssh session fails with an exit status of 1 on the very first try, autossh
1. will assume that there is some problem with syntax or the connection setup,
and will exit rather than retrying;
2. There is a "starting gate" time. If the first ssh process fails within the
first few seconds of being started, autossh assumes that it never made it
"out of the starting gate", and exits. This is to handle initial failed
authentication, connection, etc. This time is 30 seconds by default, and can
be adjusted (see the AUTOSSH_GATETIME environment variable below). If
AUTOSSH_GATETIME is set to 0, then both behaviours are disabled: there is no
"starting gate", and autossh will restart even if ssh fails on the first run
with an exit status of 1. The "starting gate" time is also set to 0 when the
-f flag to autossh is used.
AUTOSSH_GATETIME
Specifies how long ssh must be up before we consider it a successful connection. The default is 30 seconds. Note that if AUTOSSH_GATETIME is set to 0,
then not only is the gatetime behaviour turned off, but autossh also ignores
the first run failure of ssh. This may be useful when running autossh at
boot.
Usage:
AUTOSSH_GATETIME=0 autossh -M 2000 -i /etc/dropbear/id_rsa -R 5022:localhost:22 -R user@host.name -p 6022 -N
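Since the excerpt notes that the "starting gate" time is also set to 0 when the -f flag is used, backgrounding autossh with -f may be an alternative to setting the variable (a sketch reusing the command from the question, untested):
autossh -f -M 2000 -i /etc/dropbear/id_rsa -R 5022:localhost:22 -R user@host.name -p 6022 -N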
In order to run Amplab's training exercises, I've created a keypair on us-east-1, installed the training scripts (git clone git://github.com/amplab/training-scripts.git -b ampcamp4) and created the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY following the instructions in http://ampcamp.berkeley.edu/big-data-mini-course/launching-a-bdas-cluster-on-ec2.html
Now running
./spark-ec2 -i ~/.ssh/myspark.pem -r us-east-1 -k myspark --copy launch try1
generates the following messages:
johndoe@ip-some-instance:~/projects/spark/training-scripts$ ./spark-ec2 -i ~/.ssh/myspark.pem -r us-east-1 -k myspark --copy launch try1
Setting up security groups...
Searching for existing cluster try1...
Latest Spark AMI: ami-19474270
Launching instances...
Launched 5 slaves in us-east-1b, regid = r-0c5e5ee3
Launched master in us-east-1b, regid = r-316060de
Waiting for instances to start up...
Waiting 120 more seconds...
Copying SSH key /home/johndoe/.ssh/myspark.pem to master...
ssh: connect to host ec2-54-90-57-174.compute-1.amazonaws.com port 22: Connection refused
Error connecting to host Command 'ssh -t -o StrictHostKeyChecking=no -i /home/johndoe/.ssh/myspark.pem root@ec2-54-90-57-174.compute-1.amazonaws.com 'mkdir -p ~/.ssh'' returned non-zero exit status 255, sleeping 30
ssh: connect to host ec2-54-90-57-174.compute-1.amazonaws.com port 22: Connection refused
Error connecting to host Command 'ssh -t -o StrictHostKeyChecking=no -i /home/johndoe/.ssh/myspark.pem root@ec2-54-90-57-174.compute-1.amazonaws.com 'mkdir -p ~/.ssh'' returned non-zero exit status 255, sleeping 30
...
...
subprocess.CalledProcessError: Command 'ssh -t -o StrictHostKeyChecking=no -i /home/johndoe/.ssh/myspark.pem root@ec2-54-90-57-174.compute-1.amazonaws.com '/root/spark/bin/stop-all.sh'' returned non-zero exit status 127
where root@ec2-54-90-57-174.compute-1.amazonaws.com is the user & master instance. I've tried -u ec2-user and increasing -w all the way up to 600, but get the same error.
I can see the master and slave instances in us-east-1 when I log into the AWS console, and I can actually ssh into the Master instance from the 'local' ip-some-instance shell.
My understanding is that the spark-ec2 script takes care of defining the Master/Slave security groups (which ports are listened on and so on), and I shouldn't have to tweak these settings. That said, master and slaves all listen on port 22 (Port: 22, Protocol: tcp, Source: 0.0.0.0/0 in the ampcamp3-slaves/masters security groups).
I'm at a loss here, and would appreciate any pointers before I spend all my R&D funds on EC2 instances.... Thanks.
This is most likely caused by SSH taking a long time to start up on the instances, causing the 120 second timeout to expire before the machines could be logged into. You should be able to run
./spark-ec2 -i ~/.ssh/myspark.pem -r us-east-1 -k myspark --copy launch --resume try1
(with the --resume flag) to continue from where things left off without re-launching new instances. This issue will be fixed in Spark 1.2.0, where we have a new mechanism that intelligently checks the SSH status rather than relying on a fixed timeout. We're also addressing the root causes behind the long SSH startup delay by building new AMIs.
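If you prefer not to rely on a fixed wait at all while that fix is unreleased, you could poll the master's SSH port yourself before resuming (a sketch, not part of spark-ec2; the hostname is the example master from the log above):
until nc -z ec2-54-90-57-174.compute-1.amazonaws.com 22; do
echo "Waiting for SSH on the master..."
sleep 10
done
./spark-ec2 -i ~/.ssh/myspark.pem -r us-east-1 -k myspark --copy launch --resume try1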
I'm trying to set up an automatic rsync backup (using cron) over an ssh tunnel, but I'm getting the error "Connection to localhost closed by remote host.". I'm running Ubuntu 12.04. I've searched for help and tried many solutions, such as adding ALL:ALL to /etc/hosts.allow, checking for #MaxStartups 10:30:60 in sshd_config, setting UsePrivilegeSeparation no in sshd_config, and creating /var/empty/sshd, but none have fixed the problem.
I have autossh running to make sure the tunnel is always there:
autossh -M 25 -t -L 2222:destination.address.edu:22 pbeyersdorf@intermediate.address.edu -N -f
This seems to be running fine, and I've been able to use the tunnel for various rsync tasks, and in fact the first time I ran the following rsync task via cron it succeeded:
rsync -av --delete-after /tank/Documents/ peteman@10.0.1.5://Volumes/TowerBackup/tank/Documents/
printing the status of each file, followed by the output
sent 7331634 bytes received 88210 bytes 40215.96 bytes/sec
total size is 131944157313 speedup is 17782.61
Ever since that first success, every attempt gives me the following output
building file list ... Connection to localhost closed by remote host.
rsync: connection unexpectedly closed (8 bytes received so far) [sender]
rsync error: unexplained error (code 255) at io.c(605) [sender=3.0.9]
An rsync operation of a smaller subdirectory works as expected. I'd appreciate any ideas on what could be the problem.
It seems the issue is related to autossh. If I create my tunnel via ssh instead of autossh, it works fine. I suspect I could tweak the environment variables that affect the autossh configuration, but for my purposes I've solved the problem by wrapping the rsync command in a script that first opens a tunnel via ssh, executes the backup, then kills the ssh tunnel, thereby eliminating the need for the always-open tunnel created by autossh:
#!/bin/sh
#Start SSH tunnel
ssh -t -L 2222:destination.address.edu:22 pbeyersdorf@intermediate.address.edu -N -f
#execute backup commands
rsync -a /tank/Documents/ peteman@localhost://Volumes/TowerBackup/tank/Documents/ -e "ssh -p 2222"
#Kill SSH tunnel
pkill -f "ssh.*destination.address"
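For the cron part, one option is to save this wrapper under a path such as /usr/local/bin/backup-docs.sh (a hypothetical name) and schedule it, for example nightly at 02:00:
0 2 * * * /usr/local/bin/backup-docs.sh >> /var/log/backup-docs.log 2>&1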
I have written a small bash script which needs an ssh tunnel to draw data from a remote server, so it prompts the user:
echo "Please open an ssh tunnel using 'ssh -L 6000:localhost:5432 example.com'"
I would like to check whether the user has opened this tunnel, and exit with an error message if no tunnel exists. Is there any way to query the ssh tunnel, i.e. check whether local port 6000 is really tunneled to that server?
Netcat is your friend:
nc -z localhost 6000 || echo "no tunnel open"
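Inside the script, the check could look like this (a sketch reusing the prompt from the question):
if ! nc -z localhost 6000; then
echo "Please open an ssh tunnel using 'ssh -L 6000:localhost:5432 example.com'" >&2
exit 1
fi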
This is my test. Hope it is useful.
# $COMMAND is the command used to create the reverse ssh tunnel
COMMAND="ssh -p $SSH_PORT -q -N -R $REMOTE_HOST:$REMOTE_HTTP_PORT:localhost:80 $USER_NAME#$REMOTE_HOST"
# Is the tunnel up? Perform two tests:
# 1. Check for relevant process ($COMMAND)
pgrep -f -x "$COMMAND" > /dev/null 2>&1 || $COMMAND
# 2. Test tunnel by looking at "netstat" output on $REMOTE_HOST
ssh -p $SSH_PORT $USER_NAME#$REMOTE_HOST netstat -an | egrep "tcp.*:$REMOTE_HTTP_PORT.*LISTEN" \
> /dev/null 2>&1
if [ $? -ne 0 ] ; then
pkill -f -x "$COMMAND"
$COMMAND
fi
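The snippet assumes the variables are already defined; for instance (example values only, not from the original answer):
SSH_PORT=22
USER_NAME=backup
REMOTE_HOST=example.com
REMOTE_HTTP_PORT=8080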
Autossh is the best option; checking for the process does not work in all cases (e.g. zombie processes, network-related problems).
Example:
autossh -M 2323 -c arcfour -f -N -L 8088:localhost:80 host2
This is really more of a serverfault-type question, but you can use netstat.
Something like:
# netstat -lpnt | grep 6000 | grep ssh
This will tell you if there's an ssh process listening on the specified port. It will also tell you the PID of the process.
If you really want to double-check that the ssh process was started with the right options, you can then look up the process by PID with something like
# ps aux | grep PID
Use autossh. It's the tool that's meant for monitoring the ssh connection.
We can check using the ps command:
# ps -aux | grep ssh
This will show all running ssh processes, and we can find the tunnel listed among them.
These are more detailed steps to test or troubleshoot an SSH tunnel. You can use some of them in a script. I'm adding this answer because I had to troubleshoot the link between two applications after they stopped working. Just grepping for the ssh process wasn't enough, as it was still there. And I couldn't use nc -z because that option wasn't available on my incantation of netcat.
Let's start from the beginning. Assume there is a machine, which will be called local, with IP address 10.0.0.1, and another, called remote, at 10.0.3.12. I will prepend these hostnames to the commands below so it's obvious where they're being executed.
The goal is to create a tunnel that will forward TCP traffic from the loopback address on the remote machine on port 123 to the local machine on port 456. This can be done with the following command, on the local machine:
local:~# ssh -N -R 123:127.0.0.1:456 10.0.3.12
To check that the process is running, we can do:
local:~# ps aux | grep ssh
If you see the command in the output, we can proceed. Otherwise, check that the SSH key is installed on the remote. Note that omitting the username before the remote IP makes ssh use the current username.
Next, we want to check that the tunnel is open on the remote:
remote:~# netstat | grep 10.0.0.1
We should get an output similar to this:
tcp 0 0 10.0.3.12:ssh 10.0.0.1:45988 ESTABLISHED
It would be nice to actually see some data going through from the remote to the local machine. This is where netcat comes in. On CentOS it can be installed with yum install nc.
First, open a listening port on the local machine:
local:~# nc -l 127.0.0.1:456
Then make a connection on the remote:
remote:~# nc 127.0.0.1 123
If you open a second terminal to the local machine, you can see the connection. Something like this:
local:~# netstat | grep 456
tcp 0 0 localhost.localdom:456 localhost.localdo:33826 ESTABLISHED
tcp 0 0 localhost.localdo:33826 localhost.localdom:456 ESTABLISHED
Better still, go ahead and type something on the remote:
remote:~# nc 127.0.0.1 123
Hallo?
anyone there?
You should see this being mirrored on the local terminal:
local:~# nc -l 127.0.0.1:456
Hallo?
anyone there?
The tunnel is working! But what if you have an application, called appname, which is supposed to be listening on port 456 on the local machine? Terminate nc on both sides, then run your application. You can check that it's listening on the correct port with this:
local:~# netstat -tulpn | grep LISTEN | grep appname
tcp 0 0 127.0.0.1:456 0.0.0.0:* LISTEN 2964/appname
By the way, running the same command on the remote should show sshd listening on port 127.0.0.1:123.
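If you want to fold some of these checks into a script, a minimal sketch (reusing the example command, ports and application name from above) could be:
#!/bin/sh
# Verify the tunnel process and the local listener from the walkthrough above.
TUNNEL_CMD="ssh -N -R 123:127.0.0.1:456 10.0.3.12"
# 1. Is the local ssh process for the tunnel still running?
pgrep -f "$TUNNEL_CMD" > /dev/null || { echo "tunnel process not running" >&2; exit 1; }
# 2. Is appname listening on local port 456?
netstat -tln | grep -q ':456 ' || { echo "nothing listening on local port 456" >&2; exit 1; }
echo "tunnel process and local listener look OK"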
#!/bin/bash
# Check whether we have a tunnel to the example.com server
lsof -i tcp@localhost:6000 > /dev/null
# If exit code wasn't 0 then tunnel doesn't exist.
if [ $? -eq 1 ]
then
echo ' > You are missing the ssh tunnel. Creating one..'
ssh -L 6000:localhost:5432 example.com
fi
echo ' > DO YOUR STUFF < '
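Note that, as written, the ssh command above stays in the foreground until it exits, so the rest of the script only continues afterwards. If you want the script to carry on as soon as the tunnel is up, the standard -f and -N flags background it (a variation, not part of the original answer):
ssh -f -N -L 6000:localhost:5432 example.com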
stunnel is a good tool to make semi-permanent connections between hosts.
http://www.stunnel.org/
If you are running ssh in the background, use this:
sudo lsof -i -n | egrep '\<ssh\>'