Ansible: How to suppress SIGHUP - ssh

Ansible seems to be sending SIGHUP signals at the end of (certain?) tasks. This is a problem as the tasks call a bash script which in turn starts a server instance.
Now if the closing of Ansible's SSH session sends a SIGHUP, this will actually shut down the server - the start of which was the key point of the Ansible task in the first place.
So, is there a way I can guarantee that Ansible will use SSH in a way that will not send a SIGHUP signal when closing the task/session?
I could theoretically start the bash script using nohup. But this seems like a dirty workaround, as I know that SSH is capable of doing what I want: if I manually compose and pass the script command via SSH like this:
ssh user@server "scriptToStartMyServer.sh params"
... then it works fine and no SIGHUP is sent (thus the server survives and is not shut down immediately after being started).
Edit:
Sadly we cannot avoid using these bash scripts to start servers and we cannot really change them as this is something given to us by the customer.
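If the scripts themselves cannot be changed, one option is to detach the call from the SSH session on the remote side, at the point where the task invokes the script. A minimal sketch, assuming the invocation can be wrapped (the log path is just a placeholder):

# Wrap the customer's start script so the SIGHUP sent when Ansible's SSH session
# closes never reaches the server process it spawns.
nohup scriptToStartMyServer.sh params > /tmp/start-server.log 2>&1 &
disown   # (bash) drop the job from the shell's job table so it is not signalled on exit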

Related

How does running ssh with a command string work behind the scene?

I have some difficulty understanding how ssh works behind the scenes when I run it with a command string.
Normally, I type ssh rick@1.2.3.4, then I am logged into the remote machine and run some commands. If I don't use nohup or disown, once I close the session, all running processes started by ssh will stop. That's the ordinary case.
But if I run ssh with a command string, things become different. The process started by ssh won't stop if I close the session.
For example:
Run from the local command line: ssh rick@1.2.3.4 "while true; do echo \"123test\" >> sshtest.txt; done"
Run a remote script: ssh rick@1.2.3.4 './remoteScript_whichDoTheSameAsAbove.sh'
After closing the session with Ctrl+C or kill pid on the local machine, I see on the remote machine that the process is still running with ps -ef.
So my question is, could someone make a short introduction on how ssh works when I run it with a command string like above?
Also, I get very confused when I see these 2 related questions during searching:
Q1: Getting ssh to execute a command in the background on target machine. What is this question asking for? Isn't ssh rick@1.2.3.4 'some command' already run in a separate shell/pts? I don't understand what "in the background" is for.
Q2: Keep processes running after SSH session disconnects. Doesn't simply running a remote script, ssh rick@1.2.3.4 "./remoteScript.sh", already meet his requirement? I get very confused when I see so many "magic" answers under that question.
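One way to probe this yourself is to trap SIGHUP in the remote command and see whether it ever fires when the local ssh client goes away. A rough sketch, reusing the example host from the question (the file names are arbitrary):

# Remote loop that records whether a SIGHUP ever arrives.
ssh rick@1.2.3.4 'trap "echo got SIGHUP >> sighup.log" HUP; while true; do echo 123test >> sshtest.txt; sleep 1; done'
# Kill the local ssh client (Ctrl+C or kill its pid), then on the remote machine check
# whether sighup.log was written and whether the loop still shows up in ps -ef.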

Can Ansible task provisioned from Ansible run on a remote host without SSH?

Here is the problem statement:
Suppose I am on an EC2 instance A, and run an Ansible script which does the following tasks:
1. Create an EC2 instance B
2. SSH into it
3. Trigger an Ansible script which is on B, with the simple `ansible-playbook <pb_on_B>.yml` [B is being provisioned from an AMI]
So, what will happen if the instance A gets terminated after task 3 gets started?
Will the Ansible script which is triggered in B, finish to completion?
[W]hat will happen if the instance A gets terminated after task 3 gets started?
Will the Ansible script which is triggered in B, finish to completion?
You can't tell what would happen with 100% certainty.
It depends on the shell configuration (for example TMOUT in bash), the SSH daemon configuration (the TCPKeepAlive and ClientAliveInterval parameters), timing, network conditions, and whether A closes the session gracefully (FIN) or drops it without notifying B.
Most likely the playbook execution would get interrupted.
If the SSH daemon on B cannot contact the SSH client on A (for example, to print the Ansible execution log) and receives a TCP RST packet, it will drop the session and kill the SSH session's child processes, including the shell and ansible-playbook. However, the session might also remain active until a timeout occurs, and the playbook might finish before then.
If the ansible-playbook executable were called through the nohup command (or in a screen or tmux session), it would not be interrupted when the SSH session disconnects (and the shell session closes).
Note: when you use nohup, standard output is redirected to a file called nohup.out. Refer to the answers under this question to learn about the options.
Also check this answer on Unix.SE which describes the technicalities behind the command.
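If you go the nohup route, the invocation on B could look roughly like this (the log file name is just an example, and <pb_on_B>.yml is the placeholder from the question):

# Run the playbook on B detached from the SSH session coming from A.
nohup ansible-playbook <pb_on_B>.yml > ansible-run.log 2>&1 &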
Can Ansible task provisioned from Ansible run on a remote host without SSH?
Yes, with ansible-pull:
Should you want to invert the architecture of Ansible, so that nodes check in to a central location, instead of pushing configuration out to them, you can.
The ansible-pull is a small script that will checkout a repo of configuration instructions from git, and then run ansible-playbook against that content.
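A minimal ansible-pull invocation might look like the sketch below; the repository URL is a placeholder and the playbook name is the placeholder from the question:

# B checks out the playbook repository itself and runs it locally, so no SSH session from A is involved.
ansible-pull -U https://example.com/your/ansible-repo.git <pb_on_B>.yml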
If the idea is to preserve an SSH session on instance B, without worrying about the life or death of instance A, you could try running your Ansible plays in tmux on instance B. Your workflow would then be modified like this (a command sketch follows the list):
Create an EC2 instance B
SSH into it
Install tmux - apt-get install tmux
Start a tmux session tmux new -s ansible
Trigger an Ansible script which is on B, with the simple ansible-playbook <pb_on_B>.yml
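A rough command sketch of those steps (the package name and the playbook placeholder are taken from the list above):

# On instance B, after SSHing in:
apt-get install tmux               # step 3
tmux new -s ansible                # step 4: start a named tmux session
ansible-playbook <pb_on_B>.yml     # step 5: runs inside tmux and survives the SSH session ending
# Detach with Ctrl+b then d; reattach later with: tmux attach -t ansible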

Rundeck - reboot server job

I have a Rundeck job that reboots a server by sending the command "sudo reboot". This works and the server reboots.
The problem is that Rundeck doesn't get a signal back, so the job fails.
Is there a way to make this work and get a complete signal back in rundeck?
Perhaps wrap your command in a script, background the reboot operation, and return 0? I'm doing something similar with a set of development VMs, but I'm using virsh. I don't see why this couldn't be done with a physical server:
#!/bin/bash
ssh rundeck@yourserver sudo reboot &
exit 0
You may need to experiment a bit with the ssh options (perhaps '-f' and/or '-n') to get this to work properly.
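For reference, a sketch of what that experimentation might look like (user and host are the same placeholders as in the script above):

# -n reads stdin from /dev/null; -f sends ssh to the background just before command execution.
ssh -n -f rundeck@yourserver "sudo reboot"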
Well, playing around now, I just used this as a Local Command step:
ssh ${node.username}@${node.hostname} "reboot & exit"
The return code is ZERO and everybody is happy.

AWS process launched from SSH terminate when SSH hangs up

I use SSH to connect to my AWS EC2 instances and run code that takes a long time to complete. I find that if my local computer sleeps (or even if I leave it unattended for a bit) the SSH connection hangs up (which is not fatal in itself) but this seems to terminate the code on the EC2 instance that I launched using SSH.
Also, I use SSH to locally monitor the exception of my remote code, so even if there's a way to tell the remote process to stay alive after SSH has gone, I still need a way to locally see the output of the process as it continues to run (without SSH).
How do I keep code running on my AWS EC2 instance after SSH has hung up; how can I monitor the output of such a process?
When you close your tty (the ssh connection closing, in your case), your process gets a SIGHUP, and the default action on SIGHUP is to terminate. To avoid that, you can launch your command with nohup, which makes it ignore SIGHUP, or trap SIGHUP in your code and ignore it.
There are a bunch of ways to track a background process, but perhaps the easiest is to have it write to a file and read that file from the other ssh session. If your process is a command on the command line, you can redirect its standard output and standard error to a file. When such a file keeps getting new content, it can be annoying to keep re-reading it, in which case the command tail -f comes in handy.
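Putting both ideas together, a rough sketch (the script and log file names are placeholders):

# Start the long-running job so it ignores SIGHUP and writes its output to a log file.
nohup python long_running_job.py > run.log 2>&1 &
# Later, from any new SSH session, follow the output live:
tail -f run.log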
Here is how you can configure your SSH connection to stay alive:
vi ~/.ssh/config # on your client side
Add this line to enable sending a "null packet" every 120 seconds:
ServerAliveInterval 120
If you control the server side, make a similar change:
vi /etc/ssh/sshd_config
Add these lines at the bottom of the config file:
ClientAliveInterval 120
ClientAliveCountMax 720
This is for Linux; YMMV with the settings on other OSes.
Use screen
local> ssh ...
remote> screen
remote+screen> python long_running.py ...
You can then detach from screen and even disconnect from SSH, and when you return by SSHing back in again, you can
remote> screen -r
to reconnect to your running code.

How to create repeating rsync that runs on one password input?

I want to rsync files from my home computer to a cloud server. I am able to set up a continuous rsync with the following:
#!/bin/bash
while :
do
    rsync -rav * --include=*.bz2 --exclude=*.* --exclude=ZIP.sh --exclude=UPLOAD.sh \
        --chmod=a+rwx user@server.com:/home/user/date
    sleep 180
done
This will of course run continuously if I set up SSH keys as described here. But I want the rsync to run continuously after entering the password just once, and keep running until I press Ctrl+C. Is there a way to do this?
Yes, using SSH connection sharing:
Add this to the top of your ~/.ssh/config file:
ControlMaster auto
ControlPath /tmp/ssh_%r@%n:%p
ControlPersist 8h
Connection sharing means that all your SSH connections to the same server will share the same connection. This means you can skip the authentication process for all but the first connection. The ControlPersist setting controls how long the connection will idle for before being closed (8 hours means I can login in the morning, and the connection will still be active at the end of the day, but will have expired before the next day).
The ControlPath specifies where the cached sockets will live. It can be anywhere you like, and they can be called anything you like, but the /tmp directory will do fine, and the name must be unique to each user, server, and port you wish to use, or else you'll get clashes.
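With that config in place, the flow might look roughly like this (user and host are the placeholders from the script above):

# First connection: enter the password once; ssh creates the master socket under /tmp.
ssh user@server.com exit
# Every later ssh or rsync to the same user/host/port reuses that socket and skips the prompt,
# so the while-loop script above keeps running unattended. To verify the master is still up:
ssh -O check user@server.com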
Incidentally, you should probably check out the lsyncd tool as an alternative to continuous active scanning. It uses kernel notifications to watch the file-system, and launches rsync only when something actually changes.