MongooseIM mod_event_pusher RabbitMQ

I'm trying to understand the MongooseIM configuration file (not easy, in my opinion). I spent two days trying to figure out how to configure mod_event_pusher with RabbitMQ, but it is not working.
This is my config:
[auth]
methods = ["http"]
password.format = "plain"
sasl_mechanisms = ["plain"]
[auth.http]
[outgoing_pools.http.auth.connection]
host = "https://---------------"
[outgoing_pools.rabbit.event_pusher.connection]
amqp_host = "---------damqp.com"
amqp_port = 1883
amqp_username = "---------"
amqp_password = "eld_8NZ_________DY8x"
But when I execute ./bin/mongooseimctl live, I get an error like:
Could not read the TOML configuration file
If someone has an example, that would be great.

The provided configuration file is missing the general section. This section is mandatory because it contains the list of hosts that the server handles and the default_server_domain; see the documentation.
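For reference, a minimal sketch of that section, to go at the top of mongooseim.toml (the domain names below are placeholders, not taken from the original post):
[general]
hosts = ["example.com"]
default_server_domain = "example.com"
With this in place, mongooseimctl live should at least get past the TOML parsing stage.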

How to read data from a queue instead of a topic using the Telegraf MQTT consumer

I have an ActiveMQ broker, and data is arriving in a queue inside this broker.
I am trying to read the data from that broker, but I am not able to.
Below I have given my Telegraf configuration; I have provided the topic name.
When I create a topic myself and send custom data to it, I am able to read that data properly.
[[inputs.mqtt_consumer]]
servers = ["provided"]
qos = 0
## Topics that will be subscribed to.
topics = [
"topic_name",
]
connection_timeout = "30s"
## If unset, a random client ID will be generated.
client_id = "telegraf"
## Username and password to connect MQTT server.
username = "provided"
password = "provided"
## Optional TLS Config
# tls_ca = "/etc/telegraf/ca.pem"
# tls_cert = "/etc/telegraf/cert.pem"
# tls_key = "/etc/telegraf/key.pem"
## Use TLS but skip chain & host verification
# insecure_skip_verify = false
[[inputs.activemq]]
## ActiveMQ WebConsole URL
url = "provided"
## Required ActiveMQ Endpoint
## deprecated in 1.11; use the url option
# server = "192.168.50.10"
# port = 8161
## Credentials for basic HTTP authentication
username = "provided"
password = "provided"
[[outputs.file]]
## Files to write to, "stdout" is a specially handled file.
files = ["stdout","/etc/telegraf/metrics.out"]
The data coming from the devices goes to the queue, not the topic.
As you can see, the data is present inside the queue.
So, coming to my main question: how can I read the data from the queue, rather than from the topic, using Telegraf?
MQTT supports topics by default. You either need to change your message flow to publish to topics, or configure your ActiveMQ broker to use the Virtual Topic subscription strategy for MQTT (where messages are stored in queues).
ref: https://activemq.apache.org/mqtt
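For reference, that strategy is set on the MQTT transport connector in conf/activemq.xml; a sketch, with the connector name and port assumed from the defaults:
<transportConnector name="mqtt" uri="mqtt://0.0.0.0:1883?transport.subscriptionStrategy=mqtt-virtual-topic-subscriptions"/>
With this strategy, each MQTT subscription is backed by a queue, which matches where your device data is ending up.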
Note: Please edit your post to hide your broker URL and admin password!

Celery tasks from different applications in different log files

I'm trying to configure Celery on my FreeBSD server and I'm running into some issues with the log files.
My configuration:
FreeBSD server
2 Django applications: app1 and app2
Celery (daemonized) with Redis as the broker
Each application has its own Celery tasks
My Celery config file:
I have in /etc/default/celeryd_app1 :
# Names of nodes to start
CELERYD_NODES="worker"
# Absolute or relative path to the 'celery' command:
CELERY_BIN="/usr/local/www/app1/venv/bin/celery"
# App instance to use
CELERY_APP="main"
# Where to chdir at start.
CELERYD_CHDIR="/usr/local/www/app1/src/"
# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 --concurrency=8"
# Set logging level to DEBUG
#CELERYD_LOG_LEVEL="DEBUG"
# %n will be replaced with the first part of the nodename.
CELERYD_LOG_FILE="/var/log/celery/app1/%n%I.log"
CELERYD_PID_FILE="/var/run/celery/app1/%n.pid"
# Workers should run as an unprivileged user.
CELERYD_USER="celery"
CELERYD_GROUP="celery"
# If enabled pid and log directories will be created if missing,
# and owned by the userid/group configured.
CELERY_CREATE_DIRS=1
I have exactly the same file for celeryd_app2
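For example, with only the paths changed:
CELERY_BIN="/usr/local/www/app2/venv/bin/celery"
CELERYD_CHDIR="/usr/local/www/app2/src/"
CELERYD_LOG_FILE="/var/log/celery/app2/%n%I.log"
CELERYD_PID_FILE="/var/run/celery/app2/%n.pid"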
Django settings file with Celery settings:
CELERY_BROKER_URL = 'redis://localhost:6379'
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_RESULT_BACKEND = 'redis://localhost:6379'
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_IGNORE_RESULT = False
CELERY_TASK_TRACK_STARTED = True
# Add a one-minute timeout to all Celery tasks.
CELERYD_TASK_SOFT_TIME_LIMIT = 60
Both settings files use the same Redis port.
My issue:
When I execute a Celery task for app1, I find the logs from this task in the app2 log file, with an error like this:
Received unregistered task of type 'app1.task.my_task_for_app1'
...
KeyError: 'app1.task.my_task_for_app1'
Is there an issue in my Celery config file? Do I have to set a different Redis port for each app? If so, how can I do that?
Thank you very much
I guess the problem lies in the fact that you are using the same Redis database for both applications:
CELERY_BROKER_URL = 'redis://localhost:6379'
Take a look at the guide for using Redis as a broker. Just change the database number for each application, e.g.
CELERY_BROKER_URL = 'redis://localhost:6379/0'
and
CELERY_BROKER_URL = 'redis://localhost:6379/1'
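Since CELERY_RESULT_BACKEND in the question points at the same Redis instance, it is worth giving it the matching database number too; a minimal sketch for both apps (the database numbers 0 and 1 are arbitrary):
# app1 settings.py
CELERY_BROKER_URL = 'redis://localhost:6379/0'
CELERY_RESULT_BACKEND = 'redis://localhost:6379/0'
# app2 settings.py
CELERY_BROKER_URL = 'redis://localhost:6379/1'
CELERY_RESULT_BACKEND = 'redis://localhost:6379/1'
With separate databases, each worker only sees tasks queued for its own application, so the "Received unregistered task" errors and the cross-application log entries should stop.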

How to declare a variable and reuse it in icinga2 hosts section?

For now I use the config below for my Icinga2 host server:
vars.health_check["my_module1"]={
host = "HEALTH_CHECK_SERVER_URL"
module = "my_module1"
}
vars.health_check["my_module2"]={
host = "HEALTH_CHECK_SERVER_URL"
module = "my_module2"
}
The problem, as you can see, is that I have to redeclare the same host address. When I put the host address outside of the service definition, like below, it does not work and reloading Icinga2 fails:
end_url = "HEALTH_CHECK_SERVER_URL"
vars.health_check["my_module1"]={
host = "$end_url$"
module = "my_module1"
}
vars.health_check["my_module2"]={
host = "$end_url$"
module = "my_module2"
}
I even tried using vars.end_url, but got the same result. How should I declare a variable in Icinga2?
You can use the host's address with $address$, so if the host's address is what the URL resolves to, it should work like:
end_url = "HEALTH_CHECK_SERVER_URL"
vars.health_check["my_module1"]={
host = "$address$"
module = "my_module1"
}
vars.health_check["my_module2"]={
host = "$address$"
module = "my_module2"
}
Have you looked into Icinga2 Director? It's handy, and host configs are more easily managed. Also, monitoring-portal.org is a good resource for the Icinga community.
If you use Director, you can make a clone of the command, set its arguments to variables like $end_url$, and then create the field. Then you can add the field to your template (import) and enter the value once there.
For example, we use this method for SNMP community strings. We have a field for $snmp_community$ attached to our templates, so in any command where we need the community string we just use this variable. This is how Icinga2 knows all our LAN distros' community strings, and if we need to change one we only change it in one place.

How to add a MUC component without restarting the Prosody server

To add a MUC component without restarting the Prosody server, I wrote the following code and tried to execute it through a REST API, but the MUC component is not loading.
--------------code begin---------------
local hm = require "core.hostmanager";
local mm = require "core.modulemanager";
local cmg = require "core.configmanager"; -- was missing: cmg is used below
local host = "muc.example.co";
hm.activate(host);
local key = "component_module";
local value = "muc";
cmg.set(host, key, value);
mm.load_modules_for_host(host);
-----code end-----------------
How can we enable the MUC service without reloading the Prosody server?
Using the code below, we were able to achieve it:
local muc_host = "test.example.com"
local hm = require "core.hostmanager";
local cmg = require "core.configmanager";
local mm = require "core.modulemanager"; -- needed for load_modules_for_host below
local append_text = 'Component "'..muc_host..'" "muc" \n'
-- appending the component definition to the config file
local file_name = "/etc/prosody/prosody.cfg.lua"
local file = io.open(file_name, "a")
file:write(append_text)
file:close()
-- load the updated config & activate the new MUC host
local lstatus = cmg.load(file_name)
local new_server_status = hm.activate(muc_host)
mm.load_modules_for_host(muc_host) -- was load_modules_for_host(host): undefined variable
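For reference, the line appended to /etc/prosody/prosody.cfg.lua is an ordinary component definition:
Component "test.example.com" "muc"
so the new host also survives the next regular restart.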

CouchDB Permissions over HTTPS

UPDATE / SUMMARY:
I created a blog article here about the process I went through; my config file has changed slightly from the one below:
https://medium.com/@silverbackdan/installing-couchdb-2-0-nosql-with-centos-7-and-certbot-lets-encrypt-f412198c3051#.216m9mk1m
Main issues with HTTPS:
If running both HTTP and HTTPS, shard dbs appear over HTTPS
Fauxton features are lacking over HTTPS (admin user management, config management, setup wizard, Mango indexing/querying)
Not sure if they should be, but the databases seen over HTTP and over HTTPS are not the same
I hope I'm just missing something really obvious
ORIGINAL POST:
I'm trying to configure HTTPS (SSL) with CouchDB 2.0. I'm compiling a guide for others to be able to follow as well but have come across some issues.
I think that over HTTPS I don't have the same permissions as when I enable HTTP and use that instead. In Fauxton over HTTP I can see the configuration and I can run the setup procedure. Over HTTPS I get errors saying I cannot create a database (which Fauxton tries to do automatically) because its name starts with an underscore. Most databases get set up, but a few, such as "_cluster_setup", show errors when I visit the Configuration page.
Additionally, I get a repeating error message which does not stop CouchDB, saying that the database "_users" does not exist (database_does_not_exist). It doesn't exist when I enable and connect over HTTP, but it does exist when I connect over HTTPS. If I enable both HTTP and HTTPS, then over the HTTPS connection I end up with a lot of shard databases (I'm new to NoSQL and CouchDB so I'm not sure what they are about, but they appear when errors like the above show up about creating databases starting with underscores). Either way, I see those shard databases when logged in via HTTPS but not via HTTP (Fauxton shows them as "unable to load", and I am just deleting them from the data directory at the moment).
There are also issues with accessing Fauxton over HTTPS using Chrome, but I think that's a known bug and it's OK to use Firefox or Safari at the moment.
Can anybody tell me if there are any settings that would give a connection over port 6984 using HTTPS the same administrative rights as port 5984 over HTTP? Or what the permissions issue may be that causes the HTTPS connection to raise these errors about underscores at the beginning of database names? I think that could basically resolve my main issues.
Here's my local.ini file, which may be of some use (I have also commented out ";httpd={couch_httpd, start_link, []}" in default.ini, as described here: https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=48203146):
; CouchDB Configuration Settings
; Custom settings should be made in this file. They will override settings
; in default.ini, but unlike changes made to default.ini, this file won't be
; overwritten on server upgrade.
[couchdb]
;max_document_size = 4294967296 ; bytes
;os_process_timeout = 5000
uuid = **REMOVED**
[couch_peruser]
; If enabled, couch_peruser ensures that a private per-user database
; exists for each document in _users. These databases are writable only
; by the corresponding user. Databases are in the following form:
; userdb-{hex encoded username}
;enable = true
; If set to true and a user is deleted, the respective database gets
; deleted as well.
;delete_dbs = true
[chttpd]
;port = 5984
;bind_address = 0.0.0.0
; Options for the MochiWeb HTTP server.
;server_options = [{backlog, 128}, {acceptor_pool_size, 16}]
; For more socket options, consult Erlang's module 'inet' man page.
;socket_options = [{recbuf, 262144}, {sndbuf, 262144}, {nodelay, true}]
[httpd]
; NOTE that this only configures the "backend" node-local port, not the
; "frontend" clustered port. You probably don't want to change anything in
; this section.
; Uncomment next line to trigger basic-auth popup on unauthorized requests.
WWW-Authenticate = Basic realm="administrator"
bind_address = 0.0.0.0
; Uncomment next line to set the configuration modification whitelist. Only
; whitelisted values may be changed via the /_config URLs. To allow the admin
; to change this value over HTTP, remember to include {httpd,config_whitelist}
; itself. Excluding it from the list would require editing this file to update
; the whitelist.
config_whitelist = [{httpd,config_whitelist}, {log,level}, {etc,etc}]
[query_servers]
;nodejs = /usr/local/bin/couchjs-node /path/to/couchdb/share/server/main.js
[httpd_global_handlers]
;_google = {couch_httpd_proxy, handle_proxy_req, <<"http://www.google.com">>}
[couch_httpd_auth]
; If you set this to true, you should also uncomment the WWW-Authenticate line
; above. If you don't configure a WWW-Authenticate header, CouchDB will send
; Basic realm="server" in order to prevent you getting logged out.
require_valid_user = true
secret = **REMOVED**
[os_daemons]
; For any commands listed here, CouchDB will attempt to ensure that
; the process remains alive. Daemons should monitor their environment
; to know when to exit. This can most easily be accomplished by exiting
; when stdin is closed.
;foo = /path/to/command -with args
[daemons]
; enable SSL support by uncommenting the following line and supply the PEM's below.
; the default ssl port CouchDB listens on is 6984
httpsd = {couch_httpd, start_link, [https]}
[ssl]
cert_file = /home/couchdb/couchdb/certs/cert.pem
key_file = /home/couchdb/couchdb/certs/privkey.pem
;password = somepassword
; set to true to validate peer certificates
;verify_ssl_certificates = false
; Set to true to fail if the client does not send a certificate. Only used if verify_ssl_certificates is true.
;fail_if_no_peer_cert = false
; Path to file containing PEM encoded CA certificates (trusted
; certificates used for verifying a peer certificate). May be omitted if
; you do not want to verify the peer.
cacert_file = /home/couchdb/couchdb/certs/chain.pem
; The verification fun (optional) if not specified, the default
; verification fun will be used.
;verify_fun = {Module, VerifyFun}
; maximum peer certificate depth
ssl_certificate_max_depth = 1
;
; Reject renegotiations that do not live up to RFC 5746.
secure_renegotiate = true
; The cipher suites that should be supported.
; Can be specified in erlang format "{ecdhe_ecdsa,aes_128_cbc,sha256}"
; or in OpenSSL format "ECDHE-ECDSA-AES128-SHA256".
;ciphers = ["ECDHE-ECDSA-AES128-SHA256", "ECDHE-ECDSA-AES128-SHA"]
ciphers = undefined
; The SSL/TLS versions to support
tls_versions = [tlsv1, 'tlsv1.1', 'tlsv1.2']
; To enable Virtual Hosts in CouchDB, add a vhost = path directive. All requests to
; the Virtual Host will be redirected to the path. In the example below all requests
; to http://example.com/ are redirected to /database.
; If you run CouchDB on a specific port, include the port number in the vhost:
; example.com:5984 = /database
[vhosts]
REMOVEDDOMAIN.COM:* = ./database
[update_notification]
;unique notifier name=/full/path/to/exe -with "cmd line arg"
; To create an admin account uncomment the '[admins]' section below and add a
; line in the format 'username = password'. When you next start CouchDB, it
; will change the password to a hash (so that your passwords don't linger
; around in plain-text files). You can add more admin accounts with more
; 'username = password' lines. Don't forget to restart CouchDB after
; changing this.
[admins]
;admin = mysecretpassword
**REMOVED** = **REMOVED**
[cors]
origins = *
credentials = true
headers = accept, authorization, content-type, origin, referer
methods = GET, PUT, POST, HEAD, DELETE
I've been in touch with the CouchDB team via chat. CouchDB has been well tested behind HAProxy, so I've been advised to simply use HAProxy instead, as Erlang can be very difficult to configure for SSL. I'll update the article I've written with complete instructions using HAProxy once I've got everything working.
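One likely reason for the HTTPS oddities above (an assumption, not confirmed in the chat): the [daemons] httpsd = {couch_httpd, start_link, [https]} line starts TLS on the old node-local API rather than on the clustered chttpd interface, which would explain shard databases and missing Fauxton features appearing only over port 6984. Terminating TLS in HAProxy in front of the clustered port sidesteps this; a minimal sketch, with the certificate path and backend address as assumptions:
defaults
    mode http
    timeout connect 5s
    timeout client 30s
    timeout server 30s
frontend couchdb_https
    bind *:6984 ssl crt /etc/haproxy/certs/couchdb.pem
    default_backend couchdb
backend couchdb
    server couchdb1 127.0.0.1:5984 check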