Does Monit support multiple email recipients with the same format?

I use Monit to monitor the status of a service. When the service is down, I want to send an alert email to multiple recipients using the same format.
Here is part of my monit configuration:
set mail-format { from: no-reply@gmail.com }

check host hostA with address hostA
    alert userA@gmail.com
    MAIL-FORMAT { # use local format
        subject: redis is down on hostA
        message: redis is down on hostA on port 6379
        Yours sincerely,
        monit
    }
    alert userB@gmail.com
    MAIL-FORMAT { # use local format
        subject: redis is down on hostA
        message: redis is down on hostA on port 6379
        Yours sincerely,
        monit
    }
    if failed port 6379 retry 3 then exec "/monit/scripts/myscripts.sh"
This works, but it is redundant (the same MAIL-FORMAT is repeated for the two users), and there will be multiple email formats in the same configuration file.
Does Monit support multiple recipients sharing a single local email format?

From the official documentation:
If you want to send alert messages to more email addresses, add a set
alert 'email' statement for each address.
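Taken literally, that is the global form, which in a monit control file would be a sketch like this (using the question's placeholder addresses):

set alert userA@gmail.com
set alert userB@gmail.com

The same idea carries over to service-local alert statements.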
Following this, the configuration below should work:
set mail-format { from: no-reply@gmail.com }

check host hostA with address hostA
    alert userA@gmail.com
    alert userB@gmail.com
    MAIL-FORMAT { # use local format
        subject: redis is down on hostA
        message: redis is down on hostA on port 6379
        Yours sincerely,
        monit
    }
Regards
EDIT:
The documentation does not state it, but the position of the elements might play a role here. Try defining the alert addresses AFTER the mail-format declaration. Also note that you use two mail-format declarations in your code; if both are not necessary, try using only one (the latter).
set mail-format {
    from: monit@foo.bar
    reply-to: support@domain.com
    subject: $SERVICE $EVENT at $DATE
    message: Monit $ACTION $SERVICE at $DATE on $HOST: $DESCRIPTION.
    Yours sincerely,
    monit
}

alert userA@gmail.com
alert userB@gmail.com
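Putting the two together for the original check, a minimal sketch (host, port, addresses, and script path are taken from the question):

set mail-format { from: no-reply@gmail.com }

check host hostA with address hostA
    if failed port 6379 retry 3 then exec "/monit/scripts/myscripts.sh"
    alert userA@gmail.com
    alert userB@gmail.com
    MAIL-FORMAT { # one shared local format for both recipients
        subject: redis is down on hostA
        message: redis is down on hostA on port 6379
        Yours sincerely,
        monit
    }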

Related

How do you set up Caddy Server to work within WSL?

Context
I'm trying to learn how to use secure cookies with a very simple Node.js server that returns basic HTML over localhost:3000. I was told I would need to set up a reverse proxy to get SSL working, so that my development environment could best mimic production. I realize I could learn how secure cookies work over localhost without a reverse proxy, but I wanted the challenge and to learn how to get SSL set up in a development environment.
Setup
I personally prefer to develop in WSL (WSL 2, Ubuntu-20.04), so naturally I set up the Node server in WSL, along with Caddy Server using the following configuration provided by the Level Up Tutorials course that is teaching me about web authentication. I started Caddy by running caddy run in the directory containing the following file.
{
    local_certs
}

nodeauth.dev {
    reverse_proxy 127.0.0.1:3000
}
The following were the startup logs for the Caddy Server
2021/07/01 00:53:18.253 INFO using adjacent Caddyfile
2021/07/01 00:53:18.256 WARN input is not formatted with 'caddy fmt' {"adapter": "caddyfile", "file": "Caddyfile", "line": 2}
2021/07/01 00:53:18.258 INFO admin admin endpoint started {"address": "tcp/localhost:2019", "enforce_origin": false, "origins": ["localhost:2019", "[::1]:2019", "127.0.0.1:2019"]}
2021/07/01 00:53:18.262 INFO tls.cache.maintenance started background certificate maintenance {"cache": "0xc0003c5260"}
2021/07/01 00:53:18.281 INFO http server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS {"server_name": "srv0", "https_port": 443}
2021/07/01 00:53:18.281 INFO http enabling automatic HTTP->HTTPS redirects {"server_name": "srv0"}
2021/07/01 00:53:18.450 INFO pki.ca.local root certificate is already trusted by system {"path": "storage:pki/authorities/local/root.crt"}
2021/07/01 00:53:18.451 INFO http enabling automatic TLS certificate management {"domains": ["nodeauth.dev"]}
2021/07/01 00:53:18.452 INFO tls cleaning storage unit {"description": "FileStorage:/home/rtclements/.local/share/caddy"}
2021/07/01 00:53:18.454 INFO tls finished cleaning storage units
2021/07/01 00:53:18.454 WARN tls stapling OCSP {"error": "no OCSP stapling for [nodeauth.dev]: no OCSP server specified in certificate"}
2021/07/01 00:53:18.456 INFO autosaved config (load with --resume flag) {"file": "/home/rtclements/.config/caddy/autosave.json"}
2021/07/01 00:53:18.456 INFO serving initial configuration
I also added 127.0.0.1 nodeauth.dev to the hosts file in WSL at /etc/hosts. Below is the resulting file.
# This file was automatically generated by WSL. To stop automatic generation of this file, add the following entry to /etc/wsl.conf:
# [network]
# generateHosts = false
127.0.0.1 localhost
127.0.1.1 MSI.localdomain MSI
192.168.99.100 docker
127.0.0.1 nodeauth.dev
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
Problem
Interestingly enough, I was able to hit the Node server via my browser by navigating to localhost:3000 (as expected), and I was able to hit the Caddy Server by navigating to localhost:2019. The following was the log output by Caddy Server when I hit it.
2021/07/01 00:53:32.224 INFO admin.api received request {"method": "GET", "host": "localhost:2019", "uri": "/", "remote_addr": "127.0.0.1:34390", "headers": {"Accept":["*/*"],"User-Agent":["curl/7.68.0"]}}
I am not, however, able to see the HTML from my Node server in my browser by navigating to nodeauth.dev. Nor do I see any output from running curl nodeauth.dev in my console in WSL, whereas I get the expected output when I run curl localhost:3000, also in WSL. Why is this?
I figured it had something to do with the hosts file on Windows not including this configuration, so I tried modifying that file to look like this, but I still couldn't get it to work.
# Copyright (c) 1993-2009 Microsoft Corp.
#
# This is a sample HOSTS file used by Microsoft TCP/IP for Windows.
#
# This file contains the mappings of IP addresses to host names. Each
# entry should be kept on an individual line. The IP address should
# be placed in the first column followed by the corresponding host name.
# The IP address and the host name should be separated by at least one
# space.
#
# Additionally, comments (such as these) may be inserted on individual
# lines or following the machine name denoted by a '#' symbol.
#
# For example:
#
# 102.54.94.97 rhino.acme.com # source server
# 38.25.63.10 x.acme.com # x client host
# localhost name resolution is handled within DNS itself.
# 127.0.0.1 localhost
# ::1 localhost
192.168.99.100 docker
127.0.0.1 nodeauth.dev
I tried running a PowerShell script, which I barely understand, that I found here, but that didn't work either, and I was no longer able to access localhost:3000. I'm guessing it does some form of port forwarding. Code below.
If (-NOT ([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator"))
{
    # Relaunch the script in an elevated PowerShell if not already running as Administrator
    $arguments = "& '" + $myinvocation.mycommand.definition + "'"
    Start-Process powershell -Verb runAs -ArgumentList $arguments
    Break
}

# create the firewall rule to let in 443/80
if( -not ( get-netfirewallrule -displayname web -ea 0 )) {
    new-netfirewallrule -name web -displayname web -enabled true -profile any -action allow -localport 80,443 -protocol tcp
}

# ask WSL for the eth0 address of the WSL 2 VM
$remoteport = bash.exe -c "ifconfig eth0 | grep 'inet '"
$found = $remoteport -match '\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}';
if( $found ){
    $remoteport = $matches[0];
} else {
    echo "The Script Exited, the ip address of WSL 2 cannot be found";
    exit;
}

# forward ports 80 and 443 on the Windows host to the WSL 2 VM
$ports=@(80,443);
iex "netsh interface portproxy reset";
for( $i = 0; $i -lt $ports.length; $i++ ){
    $port = $ports[$i];
    iex "netsh interface portproxy add v4tov4 listenport=$port connectport=$port connectaddress=$remoteport";
}
iex "netsh interface portproxy show v4tov4";
The only thing that worked was when I ran Caddy Server on Windows with the same configuration and changes to the hosts files as shown before. I'm not terribly sure what's going on here, but would any of y'all happen to know?

How can I create a service name and password in redis-sentinel

I am getting this error when I run celery beat -S redbeat.RedBeatScheduler:
beat raised exception : ConnectionError('Error -2 connecting to redis-sentinel:26379. Name or service not known.',)
How can I create a service_name and password in redis-sentinel?
I am not trying to use Redis as a message broker. I am using celery-redbeat to store celerybeat data in a redis-sentinel cluster, following this page: https://pypi.org/project/celery-redbeat/
and this configuration:
redbeat_redis_url = 'redis-sentinel://redis-sentinel:26379/0'
redbeat_redis_options = {
    'sentinels': [('192.168.1.1', 26379),
                  ('192.168.1.2', 26379),
                  ('192.168.1.3', 26379)],
    'socket_timeout': 0.1,
}
I added 192.168.1.1:26379 instead of redis-sentinel:26379, but when the master node in the redis-sentinel cluster goes down, beat goes down too.
redbeat_redis_url = 'redis-sentinel://192.168.1.1:26379/0'
redbeat_redis_options = {
    'sentinels': [('192.168.1.2', 26379),
                  ('192.168.1.3', 26379)],
    'socket_timeout': 0.1,
}
Unless you have redis-sentinel in your /etc/hosts file, it will not be able to resolve that name to a correct IP address. You may try replacing redis-sentinel with an IP address of your Redis server. Furthermore, it does not look like a proper Redis Sentinel configuration; the Redis Configuration section explains how to connect to Redis Sentinel, please read it.
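For the service name and password specifically, the celery-redbeat documentation describes options along these lines (a sketch; 'mymaster' and the password are placeholders that must match your Sentinel setup):

redbeat_redis_url = 'redis-sentinel://redis-sentinel:26379/0'
redbeat_redis_options = {
    'sentinels': [('192.168.1.1', 26379),
                  ('192.168.1.2', 26379),
                  ('192.168.1.3', 26379)],
    'service_name': 'mymaster',          # the master name from "sentinel monitor <name> ..." in sentinel.conf
    'password': 'your-redis-password',   # placeholder; your Redis password
    'socket_timeout': 0.1,
}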

RabbitMQ with MQTT sends message on connect

I'm using mqtt-launcher (https://github.com/jpmens/mqtt-launcher) to execute commands when a certain MQTT message with the payload "0" is received.
Here is the config:
logfile = '/home/user/mqtt-launcher/logfile'
mqtt_broker = 'broker'            # default: 'localhost'. If using TLS, this must be set to the domain name signed by$
mqtt_port = 1883                  # default: 1883
mqtt_clientid = 'mqtt-launcher-1'
mqtt_username = ''
mqtt_password = ''
mqtt_tls = None                   # default: No TLS

topiclist = {
    # topic               payload value   program & arguments
    "channel/dostuff" : {
        '0' : [
            '/usr/bin/ssh',
            '-i',
            '/home/user/.ssh/privatekey',
            'user@host',
            'script.sh'
        ]
    }
}
Every time I start the Python script, the shell script is executed twice.
But I want it to execute only once, when an MQTT message with the payload "0" is sent.
I made sure the queue which is implicitly created when subscribing was empty beforehand by purging it and then starting mqtt-launcher, but the script is still executed twice after the program connects.
When I run user@localhost:~$ mosquitto_sub -h broker -p 1883 -t 'channel/dostuff' -v -u 'user' -P 'mysecurepassword', I get channel/dostuff 0.
I'm not familiar with mosquitto, but I think this means I receive a message, right?
I turned off the retain option, restarted openHAB and RabbitMQ, but the message is still sent. Here is the openHAB mqtt.cfg:
broker.url=tcp://broker:1883
broker.user=openhab
broker.pwd=mysecurepassword
broker.qos=1
broker.retain=false
broker.async=false
You have published a message with the payload 0 and the retained bit set.
This means that whenever a client subscribes to that topic, the last message with the retained bit set will be delivered to that client.
You can clear the retained message by publishing a message with the retained bit set and a null payload to the same topic. You can do this with the mosquitto_pub command as follows:
mosquitto_pub -t "channel/dostuff" -u 'user' -P 'password' -r -n
You should make sure whatever you are using to publish the message normally is not setting the retained bit.
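For example, publishing the trigger with mosquitto_pub and no -r flag (broker, credentials, and topic as in the question) delivers it only to currently connected subscribers and does not retain it:

mosquitto_pub -h broker -p 1883 -t "channel/dostuff" -u 'user' -P 'mysecurepassword' -m 0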

Icinga2: Run check on remote host instead of master

I just updated to Icinga 2.8, which needs "the new" way of checking remote hosts, so I'm trying to get that to work.
On the master I added a folder in zones.d with the hostname of the remote host. I added a few checks, but they all seem to be executed from the master instead of the remote host.
For example: I need to monitor Redis. My redis.conf in /etc/icinga2/zones.d/remotehostname/redis.conf:
apply Service "Redis" {
    import "generic-service"
    check_command = "Redis"
    vars.notification["pushover"] = {
        groups = [ "ADMINS" ]
    }
    assign where host.name == "remotehostname"
}
A new service pops up in IcingaWeb, but it errors out with:
execvpe(/usr/lib/nagios/nagios-plugins/check_redis_publish_subscribe.pl) failed: No such file or directory
This is correct, because that file does not exist on the master. It does exist on the remote host, however.
How do I get Icinga to execute this check on the remote host and have that host return the output to the master?
You can add this to the service:
command_endpoint = host.name
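Applied to the service definition from the question, that would look something like the following sketch (it assumes remotehostname is registered as an agent endpoint in your zone configuration):

apply Service "Redis" {
    import "generic-service"
    check_command = "Redis"
    // run the check on the agent matching the host, not on the master
    command_endpoint = host.name
    vars.notification["pushover"] = {
        groups = [ "ADMINS" ]
    }
    assign where host.name == "remotehostname"
}

With command_endpoint set, the master schedules the check but the endpoint executes it, so the plugin only needs to exist on the remote host.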
Or you can try to create a zone and add the zone to the Host.
Maybe this could help you:
NetWays Blog

How Do I Connect to a Redis Sentinel that requirespass with ServiceStack.Redis?

I have a simple Redis cluster on my local machine that consists of:
master on port 6379
slave on port 6380
sentinel on port 26379
I am using ServiceStack.Redis to connect, with no problems so far. Today I added a password to each of them using the requirepass 42 setting. I can connect to all of them fine using Redis Desktop Manager, and everything works as expected.
Using the following code, I get an error when I attempt to connect. Removing the password makes everything work as expected.
var config = RedisConfiguration.Instance;

Func<string, string> hostFilter = host => string.IsNullOrEmpty(config.SecurityKey)
    ? $"{host}?db={config.Database}"
    : $"{config.SecurityKey}@{host}?db={config.Database}";

var sentinelHosts = config.SentinelHosts.Select(hostFilter);
var sentinel = new RedisSentinel(sentinelHosts, config.ServiceName)
{
    HostFilter = hostFilter,
    RedisManagerFactory = (master, slaves) => new RedisManagerPool(master)
};
sentinel.OnFailover += manager => Logger?.Warn($"Redis fail over to {sentinel.GetMaster()}");
sentinel.Start();
This code throws a RedisException "No Redis Sentinels were available", with an inner exception of "unknown command 'AUTH'".
I am not clear whether I am using the ServiceStack.Redis library improperly or my Redis cluster configuration is incorrect.
Can someone point me in the right direction?
You can use HostFilter to specify the password:
sentinel.HostFilter = host => $"{config.SecurityKey}@{host}?db={config.Database}";
But when using a password, it needs to be configured everywhere, i.e. in both Master and Slave configurations using:
requirepass password
masterauth password
The Redis Sentinels also need to be configured to use the same password so it can control the redis instances it's monitoring, which can be configured in your sentinel.conf with:
sentinel auth-pass mymaster password
The windows-password folder in the redis-config project shows an example of a password-protected Redis Sentinel configuration.