How to read data from a queue instead of a topic using the Telegraf MQTT consumer (ActiveMQ)

I have an ActiveMQ broker, and data is arriving on a queue inside this broker.
I am trying to read that data from the same broker, but I am not able to.
Below is my Telegraf configuration; I have provided the topic name.
As a test, I created a topic and sent custom data to it, and I was able to read that data properly.
[[inputs.mqtt_consumer]]
  servers = ["provided"]
  qos = 0

  ## Topics that will be subscribed to.
  topics = [
    "topic_name",
  ]

  connection_timeout = "30s"

  ## If unset, a random client ID will be generated.
  client_id = "telegraf"

  ## Username and password to connect MQTT server.
  username = "provided"
  password = "provided"

  ## Optional TLS Config
  # tls_ca = "/etc/telegraf/ca.pem"
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"
  ## Use TLS but skip chain & host verification
  # insecure_skip_verify = false

[[inputs.activemq]]
  ## ActiveMQ WebConsole URL
  url = "provided"

  ## Required ActiveMQ Endpoint
  ## deprecated in 1.11; use the url option
  # server = "192.168.50.10"
  # port = 8161

  ## Credentials for basic HTTP authentication
  username = "provided"
  password = "provided"

[[outputs.file]]
  ## Files to write to; "stdout" is a specially handled file.
  files = ["stdout", "/etc/telegraf/metrics.out"]
The data coming from the devices goes to the queue, not the topic.
As you can see, the data is present inside the queue.
So, coming to my main question: how can I read the data from the queue, not from the topic, using Telegraf?

MQTT itself only supports topics; an MQTT client cannot subscribe to a queue. You either need to change your message flow so that it publishes to topics, or configure your ActiveMQ broker to use the Virtual Topic Subscription Strategy for MQTT (where messages are stored in queues behind the scenes).
ref: https://activemq.apache.org/mqtt
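For the second option, here is a minimal sketch of how the virtual-topic strategy can be enabled on the broker's MQTT transport connector in conf/activemq.xml (the port and connector name shown are the usual defaults; adjust to your setup):

<transportConnectors>
    <transportConnector name="mqtt"
        uri="mqtt://0.0.0.0:1883?subscriptionStrategy=mqtt-virtual-topic-subscriptions"/>
</transportConnectors>

With this strategy, MQTT publishes are forwarded to VirtualTopic.* destinations and each MQTT subscription is backed by a queue (named with a Consumer.{clientId}... prefix), so messages end up stored in queues while Telegraf's mqtt_consumer keeps subscribing to ordinary MQTT topics. Note that this only helps if the devices themselves publish via MQTT; it does not let an MQTT client read an arbitrary pre-existing queue.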
Note: Please edit your post to hide your broker URL and admin password!

Related

MongooseIM mod_event_pusher RabbitMQ

I am trying to understand the MongooseIM configuration file (not easy, in my opinion). I spent two days trying to work out how to configure mod_event_pusher with RabbitMQ, but it is not working.
This is my config
[auth]
methods = ["http"]
password.format = "plain"
sasl_mechanisms = ["plain"]
[auth.http]
[outgoing_pools.http.auth.connection]
host = "https://---------------"
[outgoing_pools.rabbit.event_pusher.connection]
amqp_host = "---------damqp.com"
amqp_port = 1883
amqp_username = "---------"
amqp_password = "eld_8NZ_________DY8x"
But when I execute ./bin/mongooseimctl live I get an error like:
Could not read the TOML configuration file
If someone has an example, that would be great.
The provided configuration file is missing the general section. This section is mandatory because it contains the list of hosts that the server is handling and the default_server_domain; see the documentation.
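For reference, a minimal sketch of that section at the top of mongooseim.toml (the domain names here are placeholders; use your own XMPP domain):

[general]
  hosts = ["example.com"]
  default_server_domain = "example.com"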

TLS-Encrypted Connection with RabbitMQ Using pika

I am finding it impossible to set up an encrypted connection with a RabbitMQ broker using Python's pika library on the client side. My starting point was the pika tutorial example here, but I cannot make it work. I have proceeded as follows.
(1) The RabbitMQ configuration file was:
listeners.tcp.default = 5672
listeners.ssl.default = 5671
ssl_options.verify = verify_peer
ssl_options.fail_if_no_peer_cert = false
ssl_options.cacertfile = /etc/cert/tms.crt
ssl_options.certfile = /etc/cert/tms.crt
ssl_options.keyfile = /etc/cert/tmsPrivKey.pem
auth_mechanisms.1 = PLAIN
auth_mechanisms.2 = AMQPLAIN
auth_mechanisms.3 = EXTERNAL
(2) The rabbitmq-auth-mechanism-ssl plugin was enabled with the following command:
rabbitmq-plugins enable rabbitmq_auth_mechanism_ssl
Successful enabling was confirmed by checking the enable status through: rabbitmq-plugins list.
(3) The correctness of the TLS certificates was verified by using openssl tools as described here.
(4) The client-side program to set up the connection was:
#!/usr/bin/env python
import logging
import pika
import ssl
from pika.credentials import ExternalCredentials

logging.basicConfig(level=logging.INFO)

context = ssl.create_default_context(cafile="/Xyz/sampleNodeCert/tms.crt")
context.load_cert_chain("/Xyz/sampleNodeCert/node.crt",
                        "/Xyz/sampleNodeCert/nodePrivKey.pem")
ssl_options = pika.SSLOptions(context, '127.0.0.1')
conn_params = pika.ConnectionParameters(host='127.0.0.1',
                                        port=5671,
                                        ssl_options=ssl_options,
                                        credentials=ExternalCredentials())

with pika.BlockingConnection(conn_params) as conn:
    ch = conn.channel()
    ch.queue_declare("foobar")
    ch.basic_publish("", "foobar", "Hello, world!")
    print(ch.basic_get("foobar"))
(5) The client-side program failed with the following error message:
pika.exceptions.ProbableAuthenticationError: ConnectionClosedByBroker: (403) 'ACCESS_REFUSED - Login was refused using authentication mechanism EXTERNAL. For details see the broker logfile.'
(6) The log message in the RabbitMQ broker was:
2019-10-15 20:17:46.028 [info] <0.642.0> accepting AMQP connection <0.642.0> (127.0.0.1:48252 -> 127.0.0.1:5671)
2019-10-15 20:17:46.032 [error] <0.642.0> Error on AMQP connection <0.642.0> (127.0.0.1:48252 -> 127.0.0.1:5671, state: starting):
EXTERNAL login refused: user 'CN=www.node.com,O=Node GmbH,L=NodeTown,ST=NodeProvince,C=DE' - invalid credentials
2019-10-15 20:17:46.043 [info] <0.642.0> closing AMQP connection <0.642.0> (127.0.0.1:48252 -> 127.0.0.1:5671)
(7) The environment in which this test was done is Ubuntu 18.04 using RabbitMQ 3.7.17 on Erlang 22.0.7. On the client side, python3 version 3.6.8 was used.
Questions: Does anyone have any idea as to why my test fails? Where can I find a complete working example of setting up an encrypted connection to RabbitMQ using pika?
NB: I am familiar with this post but none of the tips in the post helped me.
After studying the link provided above by Luke Bakken, I am now in a position to answer my own question. The main change with respect to my original example is that I configure the RabbitMQ broker with a passwordless user which has the same name as the CN field of the TLS certificate on both the server and the client side. To illustrate, below, I go through my example again in detail:
(1) The RabbitMQ configuration file is:
listeners.tcp.default = 5672
listeners.ssl.default = 5671
ssl_cert_login_from = common_name
ssl_options.verify = verify_peer
ssl_options.fail_if_no_peer_cert = true
ssl_options.cacertfile = /etc/cert/tms.crt
ssl_options.certfile = /etc/cert/tms.crt
ssl_options.keyfile = /etc/cert/tmsPrivKey.pem
auth_mechanisms.1 = EXTERNAL
auth_mechanisms.2 = PLAIN
auth_mechanisms.3 = AMQPLAIN
Note that, with the ssl_cert_login_from configuration option, I am asking for the username of the RabbitMQ account to be taken from the "common name" (CN) field of the TLS certificate.
(2) The rabbitmq-auth-mechanism-ssl plugin is enabled with the following command:
rabbitmq-plugins enable rabbitmq_auth_mechanism_ssl
Successful enabling can be confirmed by checking the enable status through command: rabbitmq-plugins list.
(3) The signed TLS certificate must have the issuer and subject CN fields equal to each other and equal to the hostname of the RabbitMQ broker node. In my case, inspection of the RabbitMQ log file (in /var/log/rabbitmq) shows that the broker is running on a node called: rabbit@pnp-vm2. The host name is therefore pnp-vm2. In order to check the CN fields of the client-side certificate, I use the following command:
ap@pnp-vm2:openssl x509 -noout -text -in /etc/cert/node.crt | fgrep CN
Issuer: C = CH, ST = CH, L = Location, O = Organization GmbH, CN = pnp-vm2
Subject: C = DE, ST = NodeProvince, L = NodeTown, O = Node GmbH, CN = pnp-vm2
As you can see, both the Issuer CN field and the Subject CN field are equal to "pnp-vm2" (which is the hostname of the RabbitMQ broker, see above). I tried using this name for only one of the two CN fields, but then the connection to the broker could not be established. In my test environment, it was easy to create a client certificate with identical CN names but, in an operational environment, this may be a lot harder to do. Also, I do not quite understand the reason for this constraint: is it a bug or is it a feature? And does it originate in the particular RabbitMQ library I am using (Python's pika) or in the AMQP protocol? These questions probably deserve a dedicated post.
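(As an aside, the passwordless broker account mentioned at the start of this answer can be created with rabbitmqctl. A sketch, assuming the certificate CN is pnp-vm2 as in this example:

rabbitmqctl add_user 'pnp-vm2' 'any-temporary-password'
rabbitmqctl clear_password 'pnp-vm2'
rabbitmqctl set_permissions -p / 'pnp-vm2' '.*' '.*' '.*'

Clearing the password disables password-based logins for that user, while the EXTERNAL mechanism still authenticates it via the certificate.)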
(4) The client-side program to set up the connection is:
#!/usr/bin/env python
import logging
import pika
import ssl
from pika.credentials import ExternalCredentials

logging.basicConfig(level=logging.INFO)

context = ssl.create_default_context(cafile="/home/ap/RocheTe/cert/sampleNodeCert/tms.crt")
context.load_cert_chain("/home/ap/RocheTe/cert/sampleNodeCert/node.crt",
                        "/home/ap/RocheTe/cert/sampleNodeCert/nodePrivKey.pem")
ssl_options = pika.SSLOptions(context, 'pnp-vm2')
conn_params = pika.ConnectionParameters(host='a.b.c.d',
                                        port=5671,
                                        ssl_options=ssl_options,
                                        credentials=ExternalCredentials(),
                                        heartbeat=0)

with pika.BlockingConnection(conn_params) as conn:
    ch = conn.channel()
    ch.queue_declare("foobar")
    ch.basic_publish("", "foobar", "Hello, world!")
    print(ch.basic_get("foobar"))
    input("Press Enter to continue...")
Here, "a.b.c.d" is the IP address of the machine on which the RabbitMQ broker is running.
(5) The environment in which this test was done is Ubuntu 18.04 using RabbitMQ 3.7.17 on Erlang 22.0.7. On the client side, python3 version 3.6.8 was used.
One final word of warning: with this configuration, I was able to establish a secure connection to the RabbitMQ Broker but, for reasons which I still do not understand, it became impossible to start the RabbitMQ Web Management Tool...

Celery tasks from different applications in different log files

I'm trying to configure Celery on my FreeBSD server, and I'm running into some issues with the log files.
My configuration:
FreeBSD server
2 Django applications : app1 and app2
Celery is daemonized, with Redis as the broker
Each application has its own Celery task
My Celery config file:
I have in /etc/default/celeryd_app1 :
# Names of nodes to start
CELERYD_NODES="worker"
# Absolute or relative path to the 'celery' command:
CELERY_BIN="/usr/local/www/app1/venv/bin/celery"
# App instance to use
CELERY_APP="main"
# Where to chdir at start.
CELERYD_CHDIR="/usr/local/www/app1/src/"
# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 --concurrency=8"
# Set logging level to DEBUG
#CELERYD_LOG_LEVEL="DEBUG"
# %n will be replaced with the first part of the nodename.
CELERYD_LOG_FILE="/var/log/celery/app1/%n%I.log"
CELERYD_PID_FILE="/var/run/celery/app1/%n.pid"
# Workers should run as an unprivileged user.
CELERYD_USER="celery"
CELERYD_GROUP="celery"
# If enabled pid and log directories will be created if missing,
# and owned by the userid/group configured.
CELERY_CREATE_DIRS=1
I have exactly the same file for celeryd_app2
Django settings file with Celery settings:
CELERY_BROKER_URL = 'redis://localhost:6379'
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_RESULT_BACKEND = 'redis://localhost:6379'
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_IGNORE_RESULT = False
CELERY_TASK_TRACK_STARTED = True
# Add a one-minute timeout to all Celery tasks.
CELERYD_TASK_SOFT_TIME_LIMIT = 60
Both applications' settings use the same Redis port.
My issue:
When I execute a Celery task for app1, I find the logs from this task in app2's log file, with an error like this:
Received unregistered task of type 'app1.task.my_task_for_app1'
...
KeyError: 'app1.task.my_task_for_app1'
Is there an issue in my Celery config file? Do I have to set a different Redis port? If so, how can I do that?
Thank you very much.
I guess the problem lies in the fact that you are using the same Redis database for both applications:
CELERY_BROKER_URL = 'redis://localhost:6379'
Take a look at the guide for using Redis as a broker. Just change the database for each application, e.g.
CELERY_BROKER_URL = 'redis://localhost:6379/0'
and
CELERY_BROKER_URL = 'redis://localhost:6379/1'
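In other words, assuming the default local Redis instance, each application's Django settings would point at its own database number; the result backend can be separated in the same way. A sketch:

# app1 settings
CELERY_BROKER_URL = 'redis://localhost:6379/0'
CELERY_RESULT_BACKEND = 'redis://localhost:6379/0'

# app2 settings
CELERY_BROKER_URL = 'redis://localhost:6379/1'
CELERY_RESULT_BACKEND = 'redis://localhost:6379/1'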

CouchDB Permissions over HTTPS

UPDATE / SUMMARY:
I created a blog article here about the process I went through; my config file has changed slightly from the one below:
https://medium.com/@silverbackdan/installing-couchdb-2-0-nosql-with-centos-7-and-certbot-lets-encrypt-f412198c3051#.216m9mk1m
Main issues with HTTPS:
If running HTTP and HTTPS, shard dbs appear on HTTPS
Fauxton features lacking over HTTPS (admin user management, config management, setup wizard, Mango indexing/querying)
Not sure if they should be, but databases over HTTP and HTTPS are not the same
I hope I'm just missing something really obvious
ORIGINAL POST:
I'm trying to configure HTTPS (SSL) with CouchDB 2.0. I'm compiling a guide for others to be able to follow as well but have come across some issues.
I think that over HTTPS I don't have the same permissions as when I enable HTTP and use that instead. In Fauxton over HTTP, I can see the configuration and I can run the setup procedure. Over HTTPS, I get errors saying I cannot create a database (which Fauxton tries to do automatically) because its name starts with an underscore. Most databases get set up, but a few, such as "_cluster_setup", show errors when I visit the Configuration page.
Additionally, I get repeating error messages which do not stop CouchDB, but which say that the database "_users" does not exist (database_does_not_exist). It doesn't exist when I enable and connect over HTTP, but it does exist when I connect over HTTPS. If I enable both HTTP and HTTPS, then with my HTTPS connection I end up with a lot of shard databases (I'm new to NoSQL and CouchDB, so I'm not sure what that's about, but they appear when errors similar to the above show up, about creating databases starting with underscores). Either way, I see those shard databases when logged in via HTTPS but not via HTTP (Fauxton shows them as "unable to load", and I am just deleting them from the data directory at the moment).
There are also issues with accessing Fauxton over HTTPS using Chrome, but I think that's a known bug and it's OK to use Firefox or Safari at the moment.
Can anybody tell me if there are any settings which would give a connection over port 6984 using HTTPS the same administrative rights as port 5984 over HTTP? Or what the permissions issues may be that result in the HTTPS connection bringing up these errors about underscores at the beginning of database names? I think that could basically resolve my main issues.
Here's my local.ini file, which may be of some use (I have also commented out ";httpd={couch_httpd, start_link, []}" in default.ini, as it says to here: https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=48203146):
; CouchDB Configuration Settings
; Custom settings should be made in this file. They will override settings
; in default.ini, but unlike changes made to default.ini, this file won't be
; overwritten on server upgrade.
[couchdb]
;max_document_size = 4294967296 ; bytes
;os_process_timeout = 5000
uuid = **REMOVED**
[couch_peruser]
; If enabled, couch_peruser ensures that a private per-user database
; exists for each document in _users. These databases are writable only
; by the corresponding user. Databases are in the following form:
; userdb-{hex encoded username}
;enable = true
; If set to true and a user is deleted, the respective database gets
; deleted as well.
;delete_dbs = true
[chttpd]
;port = 5984
;bind_address = 0.0.0.0
; Options for the MochiWeb HTTP server.
;server_options = [{backlog, 128}, {acceptor_pool_size, 16}]
; For more socket options, consult Erlang's module 'inet' man page.
;socket_options = [{recbuf, 262144}, {sndbuf, 262144}, {nodelay, true}]
[httpd]
; NOTE that this only configures the "backend" node-local port, not the
; "frontend" clustered port. You probably don't want to change anything in
; this section.
; Uncomment next line to trigger basic-auth popup on unauthorized requests.
WWW-Authenticate = Basic realm="administrator"
bind_address = 0.0.0.0
; Uncomment next line to set the configuration modification whitelist. Only
; whitelisted values may be changed via the /_config URLs. To allow the admin
; to change this value over HTTP, remember to include {httpd,config_whitelist}
; itself. Excluding it from the list would require editing this file to update
; the whitelist.
config_whitelist = [{httpd,config_whitelist}, {log,level}, {etc,etc}]
[query_servers]
;nodejs = /usr/local/bin/couchjs-node /path/to/couchdb/share/server/main.js
[httpd_global_handlers]
;_google = {couch_httpd_proxy, handle_proxy_req, <<"http://www.google.com">>}
[couch_httpd_auth]
; If you set this to true, you should also uncomment the WWW-Authenticate line
; above. If you don't configure a WWW-Authenticate header, CouchDB will send
; Basic realm="server" in order to prevent you getting logged out.
require_valid_user = true
secret = **REMOVED**
[os_daemons]
; For any commands listed here, CouchDB will attempt to ensure that
; the process remains alive. Daemons should monitor their environment
; to know when to exit. This can most easily be accomplished by exiting
; when stdin is closed.
;foo = /path/to/command -with args
[daemons]
; enable SSL support by uncommenting the following line and supply the PEM's below.
; the default ssl port CouchDB listens on is 6984
httpsd = {couch_httpd, start_link, [https]}
[ssl]
cert_file = /home/couchdb/couchdb/certs/cert.pem
key_file = /home/couchdb/couchdb/certs/privkey.pem
;password = somepassword
; set to true to validate peer certificates
;verify_ssl_certificates = false
; Set to true to fail if the client does not send a certificate. Only used if verify_ssl_certificates is true.
;fail_if_no_peer_cert = false
; Path to file containing PEM encoded CA certificates (trusted
; certificates used for verifying a peer certificate). May be omitted if
; you do not want to verify the peer.
cacert_file = /home/couchdb/couchdb/certs/chain.pem
; The verification fun (optional) if not specified, the default
; verification fun will be used.
;verify_fun = {Module, VerifyFun}
; maximum peer certificate depth
ssl_certificate_max_depth = 1
;
; Reject renegotiations that do not live up to RFC 5746.
secure_renegotiate = true
; The cipher suites that should be supported.
; Can be specified in erlang format "{ecdhe_ecdsa,aes_128_cbc,sha256}"
; or in OpenSSL format "ECDHE-ECDSA-AES128-SHA256".
;ciphers = ["ECDHE-ECDSA-AES128-SHA256", "ECDHE-ECDSA-AES128-SHA"]
ciphers = undefined
; The SSL/TLS versions to support
tls_versions = [tlsv1, 'tlsv1.1', 'tlsv1.2']
; To enable Virtual Hosts in CouchDB, add a vhost = path directive. All requests to
; the Virtual Host will be redirected to the path. In the example below all requests
; to http://example.com/ are redirected to /database.
; If you run CouchDB on a specific port, include the port number in the vhost:
; example.com:5984 = /database
[vhosts]
REMOVEDDOMAIN.COM:* = ./database
[update_notification]
;unique notifier name=/full/path/to/exe -with "cmd line arg"
; To create an admin account uncomment the '[admins]' section below and add a
; line in the format 'username = password'. When you next start CouchDB, it
; will change the password to a hash (so that your passwords don't linger
; around in plain-text files). You can add more admin accounts with more
; 'username = password' lines. Don't forget to restart CouchDB after
; changing this.
[admins]
;admin = mysecretpassword
**REMOVED** = **REMOVED**
[cors]
origins = *
credentials = true
headers = accept, authorization, content-type, origin, referer
methods = GET, PUT, POST, HEAD, DELETE
I've been in touch with the CouchDB team via chat. CouchDB has been well tested behind haproxy, so I've been advised to simply use haproxy instead, as Erlang can be very difficult to configure for SSL. I'll update the article I've written with complete instructions using haproxy once I've got everything working.
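In the meantime, here is a minimal sketch of that approach: terminate TLS in haproxy and proxy plain HTTP to CouchDB's clustered port. The certificate path and addresses are placeholders (haproxy expects the certificate and key concatenated into one PEM file):

frontend couchdb_https
    bind *:6984 ssl crt /etc/haproxy/couchdb.pem
    mode http
    default_backend couchdb_http

backend couchdb_http
    mode http
    server couchdb1 127.0.0.1:5984 check

This keeps CouchDB itself on plain HTTP (port 5984), which sidesteps the Erlang [ssl] configuration entirely, while clients connect to haproxy on 6984 over TLS.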

How to force close a client connection in RabbitMQ

I have a client-server application that uses a RabbitMQ broker.
The client connects to RabbitMQ and sends messages to the server. At some point, if the server decides that this client should no longer be connected to RabbitMQ, I want to be able to force-disconnect the client from the RabbitMQ broker.
Note that in my case I don't want to send a message to the client asking it to disconnect; on the server side I want to just force-disconnect the client from RabbitMQ.
I couldn't find an API to do this. Any help is appreciated.
You can use the management console plug-in in two ways:
Manually, by going to the connection and clicking "force close".
Through the HTTP API, using an HTTP DELETE on /api/connections/name; here is a Python example:
import urllib2, base64

def calljsonAPI(rabbitmqhost, api):
    request = urllib2.Request("http://" + rabbitmqhost + ":15672/api/" + api)
    base64string = base64.encodestring('%s:%s' % ('guest', 'guest')).replace('\n', '')
    request.add_header("Authorization", "Basic %s" % base64string)
    request.get_method = lambda: 'DELETE'
    urllib2.urlopen(request)

if __name__ == '__main__':
    RabbitmqHost = "localhost"
    # here you should get the connection detail through the api
    calljsonAPI(RabbitmqHost, "connections/127.0.0.1%3A49258%20-%3E%20127.0.0.1%3A5672")
You can use rabbitmqctl to close or force-close connections:
rabbitmqctl close_connection <connectionpid> <explanation>
<connectionpid> comes from:
rabbitmqctl list_connections
# or
rabbitmqctl list_consumers
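A sketch of the full flow (the pid below is illustrative; note that the pid column is not printed by default, so ask for it explicitly):

rabbitmqctl list_connections pid peer_host peer_port user
# => <rabbit@myhost.3.4937.2>  192.168.1.20  49258  guest
rabbitmqctl close_connection "<rabbit@myhost.3.4937.2>" "closed by server policy"

The explanation string is passed to the client as part of the connection shutdown, which makes forced disconnects easier to debug on the client side.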