Detect all processes are running in monit

I'd like to run a script from monit once all of the files/processes/filesystems it checks are accessible/running. Short of writing a script to parse the output of monit status and monit summary, is there some functionality that would allow me to do this?

The way I've decided to accomplish this is to run the script outside of monit and use the monit validate utility. It can be run whether or not the monit daemon is running; its exit status ($?) is 1 if it does nothing and 0 if it detects that something is not running or accessible. It also works even if the monit daemon is not reachable from the command line (e.g. if you're not allowing localhost) or monit is not running in daemon mode.
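For example, a rough wrapper along these lines (the follow-up script path is a placeholder, and the exit-status convention described above can differ between monit versions, so verify it on your system first):

#!/bin/sh
# Keep polling monit validate until it reports nothing left to fix,
# then run the follow-up script (placeholder path).
while true; do
    monit validate
    if [ $? -ne 0 ]; then
        # Per the behaviour described above, a non-zero status means
        # validate had nothing to do, i.e. everything is up.
        break
    fi
    sleep 10
done
/usr/local/bin/after-everything-is-up.sh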

Related

Ansible playbook stops after losing connection (even for a few seconds) with the SSH window of the VM on which it is running?

My ansible playbook consists of several tasks, and I am running it on a virtual machine. I use SSH to log in to the VM and run the playbook. If my SSH window gets closed during the execution of any task (when the internet connection is unstable and unreliable), the execution of the playbook stops because the SSH session is already gone.
The playbook takes around an hour to run, and sometimes even if I lose internet connectivity for a few seconds, the SSH terminal loses its connection and the entire playbook stops. Any idea how to make the Ansible run more resilient to this problem?
Thanks in advance !!
If you need to run a long-running job on a remote system and it matters that the task completes, it is an extremely bad idea to run that job in the foreground.
It doesn't matter that the task is Ansible or that the connection is SSH. In every such case you would "push" the command to the remote host and send it to the background with something like nohup, if available. The problem is, of course, the tree of processes: your connection creates a process on the remote system, and that process creates the job you want to run. If the connection gets lost, all subprocesses will be killed automatically by the OS.
So, under Windows, maybe use RDP to open a session that stays available even after the connection is lost, or use something like Cygwin and nohup via SSH to detach your process from the SSH session so it isn't hung up with it.
Or, if you need to run a playbook on that system, install for example an AWX container and use that. There are many options, depending on your requirements, resources and administrative constraints.
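On a Linux VM the simplest version of that is to start the playbook detached from the SSH session, roughly like this (the playbook name and log path are placeholders):

# run this from a shell on the VM itself
nohup ansible-playbook site.yml > ansible-run.log 2>&1 &
tail -f ansible-run.log   # follow the output now, or again after reconnecting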

Disconnecting (SSH) from Google Compute Engine stops the running API

Any idea why, when I connect remotely (SSH session) to my Google Compute Engine instance, run a command (an HTTP API) and then leave, it stops running as well?
./main PORT // Stops when I leave
./main PORT & // Stops when I leave as well..
No matter what, if I disconnect from my current SSH session my API stops, even though the instance itself still seems to run fine.
When you disconnect your terminal, all processes started by that terminal are sent a "hangup" signal which, by default, causes the process to terminate. You can trap the hangup signal when you launch a process and cause the signal to be silently ignored. The easiest way to achieve this is with the nohup command. For example:
nohup ./main PORT &
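If you also want the server's output kept and the job dropped from the shell's job table, something along these lines works (the log file name is just an example):

nohup ./main PORT > main.log 2>&1 &   # capture stdout/stderr in a log file
disown                                # bash built-in: detach the job from this shell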
References:
What Is Nohup and How Do You Use It?
Unix Nohup: Run a Command or Shell-Script Even after You Logout
nohup(1) - Linux man page

How to kill a single apache connection without a server restart?

Using the apache status module, you can see what connections are currently connected through apache. Without restarting the apache service, I would like to kill some of those connections.
A line from the status looks something like:
Srv PID ...
2-0 3326 ...
What is the best way to kill just one of these connections?
Can one, with impunity, in a shell just kill the PID shown from apache status?
Will this harm apache in some way if some of its child processes are manually killed?
Will it still be able to respawn new processes correctly?
Any strange side effects one should be aware of?
After having done this with impunity for a while, it appears that killing the process by using the PID given from apache status indeed is an effective and safe (at least as far as I can tell) way to kill an individual connection and keep the server alive.
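Concretely, that means taking the PID column from the status output and signalling it; using the sample line above:

kill 3326       # default SIGTERM; the apache parent process spawns a replacement child
kill -9 3326    # only if a stuck child ignores the TERM signal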

Monit - stop service and stay stopped?

I have a daemon which runs via the usual init.d/service scripts.
I have monit running which ensures these daemons are restarted if they crash.
I have a request that 'service foo stop' should stop the daemon and, because it was explicitly stopped rather than crashed, monit should not restart it. How can I achieve this with monit?
I could have the service script's stop() routine call 'monit unmonitor' but this seems circular and wrong.
Thanks,
Dave
I think you should use monit stop foo instead of service foo stop. That way Monit is aware that the service didn't crash -- and won't restart it.
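For example, with the daemon registered in monit under the name foo:

monit stop foo     # stops foo and disables monitoring of it
monit start foo    # starts foo again and re-enables monitoring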
There is a MODE param for that:
Monit supports three monitoring modes per service: active, passive and manual.
Syntax:
MODE
In active mode (the default), Monit will pro-actively monitor a service and in case of problems raise alerts and/or restart the service.
In passive mode, Monit will passively monitor a service and will raise alerts, but will not try to fix a problem by executing start, stop or restart.
In manual mode, Monit will enter active mode only if a service was started via Monit
From here: https://mmonit.com/monit/documentation/monit.html#SERVICE-MONITORING-MODE
This way, if you manage services via runit or upstart and just want to use monit for alerts and dashboards, you simply set mode passive for all such services.
For example:
check process heka with pidfile /etc/sv/myservice/supervise/pid
  start program = "/usr/bin/sv start myservice"
  stop program = "/usr/bin/sv stop myservice"
  mode passive
If you need to enable/disable that online but not permanently -- please refer to other people's answers, they are fine.
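For reference, the runtime (non-permanent) toggle those answers describe looks like this, where foo is whatever name the service has in your monit configuration:

monit unmonitor foo    # stop monitoring foo until told otherwise
monit monitor foo      # resume monitoring foo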
The model is:
Monit is run as a service by init.d and is therefore controlled (stop/start/restart) by init.d. (Others, please correct me if I am wrong.)
Applications that need to be monitored are handled by monit.
Therefore, such applications should only be controlled, i.e. stopped/started/restarted, via monit.
In your monit configuration you can also set:
SET ONREBOOT LASTSTATE
With LASTSTATE, monit remembers each service's monitoring state, so a service you stopped explicitly stays stopped. As per: https://mmonit.com/monit/documentation/monit.html#SYSTEM-REBOOT-AND-SERVICE-STARTUP

Trying to start redis and resque scheduler within a rake task

I want to start redis and resque-scheduler from a rake task, so I'm doing the following:
namespace :raketask do
  task :start do
    system("QUEUE=* rake resque:work &")
    system("rake redis:start")
    system("rake resque:scheduler")
  end
end
The problem is that redis starts in the foreground, so this never kicks off the scheduler. It won't start in the background (using &). The scheduler must be started AFTER redis is up and running.
Similar to nirvdrum's answer: the resque workers are going to fail/quit if redis isn't already running and accepting connections.
check out this gist for an example of how to get things started with monit (linux stuff).
Monit allows one service to be dependent on another, and makes sure they stay alive by monitoring a .pid file.
That strikes me as not a great idea. You should have your redis server started via an init script or something. But, if you really want to go this way, you probably need to modify your redis:start task to use nohup and background the process so you can disconnect from the TTY and keep the process running.
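If you do go that route, the redis:start task would end up shelling out to something like this (the config and log paths are placeholders):

nohup redis-server /etc/redis/redis.conf > /tmp/redis-rake.log 2>&1 &
# wait until redis actually accepts connections before starting the scheduler
until redis-cli ping > /dev/null 2>&1; do sleep 1; done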