How to configure Sensu with RMQ and InfluxDB - rabbitmq

I am trying to get started with a monitoring server solution. I have the Sensu clients, RabbitMQ and Uchiwa configured. I first tried Graphite, but there were so many parts to configure that I switched to InfluxDB instead. Now I am stuck connecting Sensu to InfluxDB.
Is there a part missing in the below configuration?
Client [Sensu] > RabbitMQ <> Sensu Server <> InfluxDB <> Grafana
Any suggestions?
cat influx.json
{
  "influxdb": {
    "hosts": ["192.168.1.1"],
    "host": "192.168.1.1",
    "port": "8086",
    "database": "sensumetrics",
    "time_precision": "s",
    "use_ssl": false,
    "verify_ssl": false,
    "initial_delay": 0.01,
    "max_delay": 30,
    "open_timeout": 5,
    "read_timeout": 300,
    "retry": null,
    "prefix": "",
    "denormalize": true,
    "status": true
  }
}
cat handler.json
{
  "handlers": {
    "influxdb": {
      "type": "pipe",
      "command": "/opt/sensu/embedded/bin/metrics-influxdb.rb"
    }
  }
}
checks1:
{
  "checks": {
    "check_memory_linux": {
      "handlers": ["influxdb", "default"],
      "command": "/opt/sensu/embedded/bin/check-memory-percent.rb -w 90 -c 95",
      "interval": 60,
      "occurrences": 5,
      "subscribers": ["TEST"]
    }
  }
}
checks2:
{
  "checks": {
    "check_cpu_linux-elkctrl-pipe": {
      "type": "metric",
      "command": "/opt/sensu/embedded/bin/check-cpu.rb -w 80 -c 90",
      "subscribers": ["TEST"],
      "interval": 10,
      "handlers": ["debug", "influxdb"]
    }
  }
}

To use InfluxDB to persist your data, you must have:
The InfluxDB plugin installed (installation and usage instructions here)
Definitions for the plugin (an influxdb.json containing at least the host, port, user, password and database to be used by Sensu)
The definition, like the other config files, must be in /etc/sensu/conf.d/
Handler configuration set properly (also in conf.d)
A mutator for InfluxDB (extensions)
Your checks must send results to the handler, so their definition must contain:
"handlers": [
"influxdb"
]
Or whatever name you gave your handler.

In any case, if the influxdb config you provided above is the full extent of your configuration, it appears to be missing the username/password attributes required by the InfluxDB configuration. If they're present but just not included in the post, no big deal. Either way, I'd recommend running the following against your Sensu logs:
grep -i influxdb /var/log/sensu/sensu-server.log
to see whether the check results are getting sent to your InfluxDB instance. If they are and something is wrong, you should see an error there that points a bit more clearly to what's going on.
You can also check your InfluxDB logs to see whether they're receiving a POST from your Sensu server:
journalctl -u influxdb.service -f
But if the username/password is missing from the configuration, that's the first place I'd start.
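For reference, here is a minimal sketch of what a credentials-bearing influxdb.json in /etc/sensu/conf.d/ might look like. This is only an illustration: the exact key names expected (for example "user" vs. "username") depend on the version of metrics-influxdb.rb you have installed, so check the plugin's documentation, and the credential values below are placeholders to replace with your own:
{
  "influxdb": {
    "host": "192.168.1.1",
    "port": "8086",
    "user": "REPLACE_WITH_INFLUXDB_USER",
    "password": "REPLACE_WITH_INFLUXDB_PASSWORD",
    "database": "sensumetrics",
    "time_precision": "s",
    "use_ssl": false
  }
}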

Related

Dynamic bridging with $BRIDGE/new

I followed the instructions found here (Mosquitto-Dynamic Bridging) and here (https://github.com/Tifaifai/mosquitto#to-dynamically-createdeleteshow-a-bridge-use) about dynamic bridging in MQTT with Mosquitto.
The last link is the forked version of Mosquitto, but pull request 653 has been included in the main Mosquitto repo.
So I tried to create my bridge, first with the Mosquitto configuration file. It is a success.
Then I tried to create the bridge dynamically by sending a message on the topic $BRIDGE/new as explained in the second link.
Here is the content:
connection myBridge
address IP_ADDRESS_OF_DISTANT_MOSQUITTO:1883
topic # both 0
remote_clientid myClientID
remote_username myUsername
remote_password myPassword
In fact, I just copied the content of the configuration file that worked fine.
Success? No.
So I tried the JSON version of the message:
{
  "bridges": [
    {
      "connection": "myBridge",
      "addresses": [
        {
          "address": "IP_ADDRESS_OF_DISTANT_MOSQUITTO",
          "port": 1883
        }
      ],
      "topic": "#",
      "direction": "both",
      "qos": 0,
      "remote_username": "myUsername",
      "remote_password": "myPassword"
    }
  ]
}
Success? Also no.
I forgot something: I am using the v2.0.12 and v2.0.14 releases of Mosquitto.
Does anyone have any clue to help me find a way to use dynamic bridging?
Thanks
If you look at the latest PR for this (https://github.com/eclipse/mosquitto/pull/1926), you can see that it is targeted at Mosquitto v2.1.0, so it has not been merged into the master branch or released yet.
So it will not work with v2.0.x.
PR 653 was not merged.

Amazon Cloudwatch only receiving mem_used_percent and nothing else, despite numerous other metrics specified in config

I am trying to get CloudWatch running properly on my Lightsail instance, which I appear to have achieved with only partial success.
I have run the wizard using sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard, which produced a config file specifying numerous metrics including cpu, memory and disk usage, as outlined here. The service loads and starts with the config file and doesn't complain about invalid JSON (this did happen a few times, but I fixed it).
I can stop the service with sudo amazon-cloudwatch-agent-ctl -a stop
I then reload the config with sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -s -m ec2 -c file:/opt/aws/amazon-cloudwatch-agent/bin/config.json
Verify the service is running: sudo amazon-cloudwatch-agent-ctl -a status
Which outputs this:
{
  "status": "running",
  "starttime": "2022-01-10T21:53:12+00:00",
  "configstatus": "configured",
  "cwoc_status": "stopped",
  "cwoc_starttime": "",
  "cwoc_configstatus": "not configured",
  "version": "1.247349.0b251399"
}
Logging into my CloudWatch console, I can see the data being received, and the single line appearing on the graph there corresponds to the times that I started and stopped the service, so it's definitely doing something. And yet the only metric that appears on that graph is mem_used_percent. Why only this one metric? Where is the rest of my data pertaining to cpu, etc.? What am I doing wrong?
Here is my config.json, which as I said, is being loaded by the service without issue.
{
  "agent": {
    "metrics_collection_interval": 60,
    "run_as_user": "root"
  },
  "metrics": {
    "append_dimensions": {
      "ImageID": "${aws:ImageId}",
      "InstanceId": "${aws:InstanceId}",
      "InstanceType": "${aws:InstanceType}"
    },
    "metrics_collected": {
      "cpu": {
        "resources": [
          "*"
        ],
        "measurement": [
          "cpu_usage_active"
        ],
        "metrics_collection_interval": 60,
        "totalcpu": false
      },
      "disk": {
        "measurement": [
          "free",
          "total",
          "used",
          "used_percent"
        ],
        "metrics_collection_interval": 60,
        "resources": [
          "*"
        ]
      },
      "mem": {
        "measurement": [
          "mem_active",
          "mem_available",
          "mem_available_percent",
          "mem_free",
          "mem_total",
          "mem_used",
          "mem_used_percent"
        ],
        "metrics_collection_interval": 60
      },
      "netstat": {
        "measurement": [
          "tcp_established",
          "udp_socket"
        ]
      }
    }
  }
}
Any help greatly appreciated here. TIA.
You likely haven't fetched the configuration yet.
Check the logfile, i.e. /opt/aws/amazon-cloudwatch-agent/logs/amazon-cloudwatch-agent.log, to see which inputs are loaded:
2022-05-18T10:18:57Z I! Loaded inputs: mem disk
To fetch the configuration, do as follows (you'll need to adapt this to your environment - this is for systemd, on-premise, without SSM):
sudo amazon-cloudwatch-agent-ctl -a fetch-config -m onPremise -c file:/opt/aws/amazon-cloudwatch-agent/bin/config.json
sudo systemctl restart amazon-cloudwatch-agent.service
After:
2022-05-18T11:45:05Z I! Loaded inputs: mem net netstat swap cpu disk diskio
Maybe you are facing the same issue I did. In my case, two configuration JSON files
/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.d/file_config.json
/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json
were merged.
The files are then translated to
/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.toml.
When I checked the translated file, only the mem definition from /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.d/file_config.json had been picked up. So I deleted that file and restarted the service.
sudo systemctl restart amazon-cloudwatch-agent
After the restart, the toml file contained what I expected and the metrics were in place.

Unable to connect to a site from Testcafe IDE on OSX

My fixtures are set up like so
{
  "fixtures": [
    {
      "name": "login",
      "pageUrl": "http:\/\/localhost:3000\/",
      "tests": [
        {
          "name": "type name",
          "commands": [
            {
              "type": "type-text",
              "studio": {
              },
              "callsite": "0",
              "selector": {
                "type": "js-expr",
                "value": "input[type=email]"
              },
              "options": {
              },
              "text": "example#email.com"
            }
          ]
        }
      ]
    }
  ]
}
with one simple test to find the input and type some text, but when running the command I get:
testcafe chrome login.testcafe
ERROR Unable to establish one or more of the specified browser connections. This can be caused by network issues or remote device failure.
Type "testcafe -h" for help.
I've seen this issue a couple of times on their issues board: one relating to CI integration on a Linux server, and another which seems like a similar issue of trying to connect to localhost:
https://github.com/DevExpress/testcafe-browser-provider-electron/issues/20
https://github.com/DevExpress/testcafe/issues/1133
I'm new to TestCafe, so any help would be appreciated!
I've found the solution: some network policies don't allow access to your machine on certain ports (in my example it's 57501).
testcafe chrome login.testcafe --hostname localhost
Adding --hostname resolves the issue.
Documentation:
https://devexpress.github.io/testcafe/documentation/using-testcafe/command-line-interface.html#--hostname-name
I still don't know how to launch from the IDE, but this resolves my main issue.
TestCafe Studio Preview does not support setting command line options (hostname in your case). The TestCafe team is going to implement this functionality in the official release.
So, for now, it is only possible to run tests via a command line.
UPDATE:
You can set the hostname option in the TestCafe Studio Settings dialog.
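If you are still running from the command line in the meantime, one convenience is to wrap the working invocation in an npm script so the --hostname option isn't forgotten. This is just a hypothetical sketch assuming a Node project with testcafe installed as a dev dependency; the script name test:e2e is made up here, and the test file and flag are the ones from this question:
{
  "devDependencies": {
    "testcafe": "*"
  },
  "scripts": {
    "test:e2e": "testcafe chrome login.testcafe --hostname localhost"
  }
}
You would then run it with npm run test:e2e.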

OpenShift Aggregated Logging: Parse Apache access log

When using OpenShift Aggregated Logging I get logs nicely fed into Elasticsearch. However, the line as logged by Apache ends up in a single message field.
I'd like to create queries in Kibana where I can access the URL, the status code and other fields individually. For that, the Apache access log needs to be parsed into separate fields.
How can I do that?
This is an example entry as seen in kibana:
{
  "_index": "42-steinbruchsteiner-staging.3af0bedd-eebc-11e6-af4b-005056a62fa6.2017.03.29",
  "_type": "fluentd",
  "_id": "AVsY3aSK190OXhxv4GIF",
  "_score": null,
  "_source": {
    "time": "2017-03-29T07:00:25.595959397Z",
    "docker_container_id": "9f4fa85a626d2f5197f0028c05e8e42271db7a4c674cc145204b67b6578f3378",
    "kubernetes_namespace_name": "42-steinbruchsteiner-staging",
    "kubernetes_pod_id": "56c61b65-0b0e-11e7-82e9-005056a62fa6",
    "kubernetes_pod_name": "php-app-3-weice",
    "kubernetes_container_name": "php-app",
    "kubernetes_labels_deployment": "php-app-3",
    "kubernetes_labels_deploymentconfig": "php-app",
    "kubernetes_labels_name": "php-app",
    "kubernetes_host": "itsrv1564.esrv.local",
    "kubernetes_namespace_id": "3af0bedd-eebc-11e6-af4b-005056a62fa6",
    "hostname": "itsrv1564.esrv.local",
    "message": "10.1.3.1 - - [29/Mar/2017:01:59:21 +0200] \"GET /kwf/status/health HTTP/1.1\" 200 2 \"-\" \"Go-http-client/1.1\"\n",
    "version": "1.3.0"
  },
  "fields": {
    "time": [
      1490770825595
    ]
  },
  "sort": [
    1490770825595
  ]
}
Disclaimer: I did not test this out in openshift. I don't know which tech stack you are using for your microservice.
This is how I do this in a spring boot application (with logback) deployed in Kubernetes.
1. Use the logstash encoder for logback (this will write logs in JSON format, which is more ELK-stack friendly)
I have a gradle dependency to enable this
compile "net.logstash.logback:logstash-logback-encoder:3.5"
Then configure LogstashEncoder as the encoder in the appender, in logback-spring.groovy/logback-spring.xml (or logback.xml)
2. Have some filters or libraries to write the access log
For 2, either use:
A. The "net.rakugakibox.springbootext:spring-boot-ext-logback-access:1.6" library
(This is what I am using)
It gives a nice JSON format, as follows:
{
  "@timestamp": "2017-03-29T09:43:09.536-05:00",
  "@version": 1,
  "@message": "0:0:0:0:0:0:0:1 - - [2017-03-29T09:43:09.536-05:00] \"GET /orders/v1/items/42 HTTP/1.1\" 200 991",
  "@fields.method": "GET",
  "@fields.protocol": "HTTP/1.1",
  "@fields.status_code": 200,
  "@fields.requested_url": "GET /orders/v1/items/42 HTTP/1.1",
  "@fields.requested_uri": "/orders/v1/items/42",
  "@fields.remote_host": "0:0:0:0:0:0:0:1",
  "@fields.HOSTNAME": "0:0:0:0:0:0:0:1",
  "@fields.content_length": 991,
  "@fields.elapsed_time": 48,
  "HOSTNAME": "ABCD"
}
OR
B. Use Logback's Tee Filter
OR
C. Spring's CommonsRequestLoggingFilter (Did not really test this out)
Add a bean definition
@Bean
public CommonsRequestLoggingFilter requestLoggingFilter() {
    CommonsRequestLoggingFilter crlf = new CommonsRequestLoggingFilter();
    crlf.setIncludeClientInfo(true);
    crlf.setIncludeQueryString(true);
    crlf.setIncludePayload(true);
    return crlf;
}
Then set org.springframework.web.filter.CommonsRequestLoggingFilter to DEBUG. This can be done in application.properties by adding:
logging.level.org.springframework.web.filter.CommonsRequestLoggingFilter=DEBUG

How to attach pre-uploaded SSL cert to ELB in CloudFormation template?

I've been trying to attach an SSL certificate that I'm currently using for one of my Elastic Load Balancing instances to a new CloudFormation template, but each time I get:
Server Certificate not found for the key
And then the CloudFormation stack starts to roll back at that point.
"Listeners" : [
{
"LoadBalancerPort" : "443",
"InstancePort" : "80",
"SSLCertificateId" : "start_certname_com",
"Protocol" : "HTTPS"
},...
Amazon is asking for "The ARN of the SSL certificate to use", and I believe this is correct since it is the exact string which appears in the dropdown of the currently set up ELB, which forwards 443 to port 80 on the instances.
Am I missing something on my Listener?
You can derive the ARN for a certificate in CloudFormation with only the certificate name. No need to run a command line tool and hard code the value into your CloudFormation template.
"Parameters":{
"Path":{
"Description":"AWS Path",
"Default":"/",
"Type":"String"
}
}
...
"Listeners" : [
{
"LoadBalancerPort" : "443",
"InstancePort" : "80",
"SSLCertificateId" : {
"Fn::Join":[
"",
[
"arn:aws:iam::",
{
"Ref":"AWS::AccountId"
},
":server-certificate",
{
"Ref":"Path"
},
"start_certname_com"
]
]
},
"Protocol" : "HTTPS"
},...
This determines your account ID with the {"Ref":"AWS::AccountId"} pseudo parameter and combines it with the other elements needed to form the ARN. Note that I'm using a parameter called Path in case you've set a path for your certificate. If not, the default of "/" works fine.
This solution was mentioned by @Tristan and is an extension of merrix143243's solution.
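If your template can use Fn::Sub, roughly the same ARN can be assembled with less nesting. This is only a sketch of an equivalent listener, reusing the certificate name from this question and assuming the default path of "/":
"Listeners" : [
  {
    "LoadBalancerPort" : "443",
    "InstancePort" : "80",
    "SSLCertificateId" : {
      "Fn::Sub": "arn:aws:iam::${AWS::AccountId}:server-certificate/start_certname_com"
    },
    "Protocol" : "HTTPS"
  },...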
I've actually figured out how to do this while waiting for an answer: you need to use the IAM CLI tools provided by Amazon and then use the command
iam-servercertgetattributes -s certname
This will provide you a string like:
arn:aws:iam::123456789123:server-certificate/start_certname_com
This is the value you place in the "SSLCertificateId" field.
The setup instructions for the IAM command line tools (CLI) can be found at:
http://docs.aws.amazon.com/IAM/latest/CLIReference/Setup.html
Download the tool kit from aws here
http://aws.amazon.com/developertools/AWS-Identity-and-Access-Management/4143
All in all your final block will look like:
"Listeners" : [
{
"LoadBalancerPort" : "443",
"InstancePort" : "80",
"SSLCertificateId" : "arn:aws:iam::123456789123:server-certificate/start_certname_com",
"Protocol" : "HTTPS"
},...
Here's how you get the long cert name with the latest AWS CLI:
pip install awscli
aws iam list-server-certificates
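That command prints JSON; you would copy the Arn value for the certificate you want. A rough, abridged sketch of the shape of the output (the values shown are placeholders reusing the example names from above, and the exact set of metadata fields can vary by CLI version):
{
  "ServerCertificateMetadataList": [
    {
      "Path": "/",
      "ServerCertificateName": "start_certname_com",
      "ServerCertificateId": "EXAMPLE_ID",
      "Arn": "arn:aws:iam::123456789123:server-certificate/start_certname_com"
    }
  ]
}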