Monitoring RabbitMQ (v3.6.8) with Prometheus

I have a task: build and release a monitoring system consisting of a RabbitMQ cluster (with 3 nodes) and a standalone Grafana server for visualising the metrics.
In the official documentation of the Prometheus plugin for RabbitMQ (documentation) I found the following note:
This plugin is new as of RabbitMQ 3.8.0.
But my cluster is on version 3.6.8, and when I run the following command
rabbitmq-plugins enable rabbitmq_prometheus
The output is:
Error: The following plugins could not be found:
rabbitmq_prometheus
Upgrading the cluster is not possible right now, so my question is:
How can I configure monitoring of the cluster, without upgrading it, using Prometheus (preferred option) and Grafana?
Thanks in advance!

The Prometheus plugin is not the only way to monitor a RabbitMQ cluster.
You can also run the rabbitmq_exporter as a sidecar. If you are not on a Docker platform, you can download the exporter from the release assets and install it as a service somewhere.
It would be best to install the exporter on every server hosting a RabbitMQ node (a minimal sidecar sketch follows this list) because:
you will need as many exporter installs as there are nodes (Prometheus is service-oriented monitoring)
out of the box, the exporter accesses the management plugin interface of RabbitMQ, which should stay bound to localhost to reduce the attack surface
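For illustration, here is a minimal sidecar sketch, assuming the commonly used kbudde/rabbitmq-exporter image and its RABBIT_URL / PUBLISH_PORT style settings (check the exporter's README for the exact variable names; the user and password are placeholders):

version: "2"
services:
  rabbitmq-exporter:
    image: kbudde/rabbitmq-exporter
    network_mode: host                        # so it can reach the management API bound to localhost
    environment:
      RABBIT_URL: "http://localhost:15672"    # management plugin of the local node
      RABBIT_USER: "monitoring"               # hypothetical read-only monitoring user
      RABBIT_PASSWORD: "secret"
      PUBLISH_PORT: "9419"                    # port Prometheus will scrape

The same environment variables can be set for a plain binary install running as a service.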
If your hands are really tied, you can deploy them anywhere (let's say on the same server) and point each exporter to a different RabbitMQ node. The Prometheus configuration can then identify the underlying service.
- job_name: rabbitmq
  honor_labels: true
  static_configs:
    - targets: ['monitoring-server:9419']    # one port per exporter instance
      labels:
        instance: 'rabbitmq_node_A'
    - targets: ['monitoring-server:9420']
      labels:
        instance: 'rabbitmq_node_B'
  # or play with relabeling to achieve the same (sketch below).
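The relabeling variant mentioned in the comment could look roughly like this (a sketch only; the ports and node names match the static example above):

- job_name: rabbitmq
  honor_labels: true
  static_configs:
    - targets: ['monitoring-server:9419', 'monitoring-server:9420']
  relabel_configs:
    - source_labels: [__address__]
      regex: '.*:9419'
      target_label: instance
      replacement: 'rabbitmq_node_A'
    - source_labels: [__address__]
      regex: '.*:9420'
      target_label: instance
      replacement: 'rabbitmq_node_B'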
An important drawback is that there are more cases where the exporter may not be able to reach RabbitMQ, so you can end up alerting on events that do not actually impact your RabbitMQ cluster.

Related

Is there any way we can configure Hazelcast in a master-slave architecture like Redis with Spring Boot

Currently Hazelcast is using cloud discovery for communication.
There are 4 Kubernetes pods, each with an in-memory Hazelcast instance; whenever the Hazelcast cache is updated in one pod, it gets updated in the other pods as well. But in case both of these pods get downscaled and terminated, the data that exists only in those 2 pods is lost. Can we have something like Redis, where we provide the server and port of the Hazelcast cluster so that it is independent of the Kubernetes pods?
Please check the following blog post ("Scale without Data Loss!" section) to read how to scale a Hazelcast cluster on Kubernetes without losing data.
Also, you can check the official README of the hazelcast/hazelcast-kubernetes plugin. There is a section dedicated to scaling there.
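As an illustration of what the Kubernetes discovery setup looks like, here is a minimal hazelcast.yaml sketch for newer Hazelcast versions (the service name and namespace are placeholders; on 3.x with the hazelcast-kubernetes plugin the same thing is configured through discovery-strategy properties, so check the README above for your version):

hazelcast:
  network:
    join:
      multicast:
        enabled: false
      kubernetes:
        enabled: true
        service-name: hazelcast-service   # hypothetical headless service exposing the members
        namespace: default

With discovery in place, data safety during scale-down comes from backups and graceful shutdown, which is what the linked section describes.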

flink web.port can not be configured correctly in yarn mode

I want to get Flink's metrics information via the REST API. My Flink is managed by YARN, but after changing the web.port configuration in flink-conf.yaml, the change has no effect, and web.port in the Flink dashboard is always 0. So I cannot get the Flink metrics information via the REST API.
Environment:
ubuntu 16.04
openjdk-8
hadoop 2.7.1.2.3.6.0-3796
flink 1.4.0
When running Flink on Yarn, Flink will pick a random port (0) for the web UI in order to avoid port conflicts with other applications running on the same machine.
In order to access the Flink web UI you can query the Yarn web application proxy (YarnResourceManagerURL/proxy/application_/...). But be aware that only GET requests are properly forwarded to the Yarn application.
Alternatively, Flink logs the web UI URL to stdout when starting a Yarn session. Moreover, you can retrieve the chosen port from the log files: in newer versions (>= 1.5) Flink logs "Rest endpoint listening at hostname:port" at INFO level, and in older versions (<= 1.4, or if using the legacy mode) Flink logs "Web frontend listening at hostname:port".

Is it a good way to run Kafka on Kubernetes?

For a large online application that runs on k8s, the scale is maybe 500,000 daily active users.
The application inside k8s needs a messaging feature (Pub/Sub), and there are these options:
Kafka
RabbitMQ
Redis
Kafka
It needs ZooKeeper, and it is best run directly on the OS because it depends on disk I/O. So if I install it into the k8s cluster, how? Will the performance be worse?
And if I keep Kafka outside of the k8s cluster and connect to it from the application inside the k8s cluster, how about that performance? They are in different layers; won't it be slow?
RabbitMQ
It's slower than Kafka, but for an application with 500,000 daily active users, is it good enough? If so, maybe it's a good choice.
Redis
It's another option, maybe the simplest one. But from what I read on the internet, it will sometimes lose messages. If true, that's terrible.
So, the most important question is: is using Kafka (along with ZooKeeper) on k8s good or not in this use case?
Yes, running Kafka on Kubernetes is great. Check out this example: https://github.com/Yolean/kubernetes-kafka. It includes ZooKeeper and Kafka as StatefulSets.
PS. Running any of the services in your question on Kubernetes will be pleasant. You can Google the name of the service and "kubernetes" and find example manifests. Many examples here: https://github.com/kubernetes/charts.
For Kafka, you can find some suggestions here. Kubernetes 1.7+ supports local persistent volumes, which may be good for a Kafka deployment.
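To make the StatefulSet idea concrete, here is a minimal sketch of what such a deployment looks like (image name, service name, mount path and storage size are placeholders, not taken from the projects linked above):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka
spec:
  serviceName: kafka-headless          # hypothetical headless Service giving each broker a stable DNS name
  replicas: 3
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
        - name: kafka
          image: example/kafka:latest  # placeholder image
          ports:
            - containerPort: 9092
          volumeMounts:
            - name: data
              mountPath: /var/lib/kafka
  volumeClaimTemplates:                # one PersistentVolumeClaim per broker, so data survives pod restarts
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 100Gi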
You can also take a look at the following project:
https://github.com/EnMasseProject/barnabas
It's about running Kafka on Kubernetes and OpenShift as well. It supports deployment with StatefulSets backed by persistent volumes, or just in memory (for development or testing purposes). It provides Kafka Connect deployment and Prometheus metrics as well.
Another simple configuration of Kafka/Zookeeper on Kubernetes in DigitalOcean with external access:
https://github.com/StanislavKo/k8s_digitalocean_kafka
You can connect to Kafka from outside of AWS/DO/GCE with the regular binary protocol. The connection is PLAINTEXT or SASL_PLAINTEXT (user/password).
The Kafka cluster is a StatefulSet, so you can scale the cluster easily.

Creating AMQ network of broker clusters on JBoss Fuse 6.2, without fabric

I want to create 2 broker clusters connected by a network of brokers in JBoss Fuse 6.2; each cluster has 2 master/slave pairs.
It's a small cluster, so we don't intend to use Fabric/Zookeeper; everything will be statically configured, no auto discovery.
Questions
Is it possible to use fabric profiles to build the topology, but
avoid using fabric at runtime?
Can we use Git, or something similar, for centrally managing container config files, again, without fabric?
We tried creating profiles using fabric:mq-create, but the command is not available unless a fabric is first created, which defeats the purpose.
No, fabric profiles require using fabric. You can use Git to store the files, but you cannot have JBoss Fuse use it automatically the way it does with fabric; you would need to use Git manually.
The AMQ broker in JBoss Fuse is just standard Apache ActiveMQ, so you can configure it manually/statically as a network of brokers. It is just not very easy to do if you haven't done that before.
See the JBoss A-MQ documentation as that covers the broker: http://www.jboss.org/products/amq/overview/
for example at: https://access.redhat.com/documentation/en-US/Red_Hat_JBoss_A-MQ/6.2/html/Using_Networks_of_Brokers/index.html

How to configure Redis in Spring XD distributed runtime?

The Spring XD documentation (http://docs.spring.io/spring-xd/docs/1.0.0.RC1/reference/html/) recommends running Zookeeper as an ensemble so that Zookeeper is highly available. There is not a lot of detail about high availability for Redis.
If I were to run 2 XD admin instances and, say, 4 container instances, I see 3 options:
should I run a Redis instance on each server that runs a container or admin? In that case, does the distributed runtime work properly with different Redis instances handling transport for different modules?
OR
should I run 1 Redis instance on a separate server and configure all XD instances to talk to this instance? In this case the single Redis instance is not highly available
OR
should I configure Redis Cluster or Redis Sentinel for high availability? I am not sure how XD or any other client would connect to a cluster or HA setup.
Thanks
I would suggest that you run a single Redis instance; there are some persistence settings you can change that may meet your requirements.
http://redis.io/topics/persistence
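For the single-instance option, each XD node would simply point at that shared Redis server. A rough sketch of the relevant Spring-Boot-style properties (the host name is a placeholder, and the exact config file and keys depend on your XD version, so check the reference guide):

spring:
  redis:
    host: redis-host   # placeholder: the one shared Redis server
    port: 6379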
We will be adding support for Redis Sentinel, certainly in the Spring XD 1.1 release, but possibly in a maintenance release depending on what library changes we need to pick up. Spring Data Redis and Spring Boot have recent updates to support Redis Sentinel.
If you are using Redis as a message transport and want higher guarantees, I would switch to using the Rabbit HA configuration of the MessageBus.
Cheers,
Mark