How to get Units in the CloudWatch metrics ingested via the amazon-cloudwatch-agent collectd socket listener?

I have the amazon-cloudwatch-agent configured to retrieve custom metrics via collectd.
That is working in the sense that I can see the metrics in the CloudWatch UI, but they have no unit (for example, collectd_GenericJMX_java_heap_used should have Bytes as its CloudWatch unit).
As far as I know, collectd does not have the concept of units, and neither, it seems, does telegraf.Metric (which I believe is the underlying type the amazon-cloudwatch-agent uses to ingest collectd metrics).
Is there any way to specify the "units" for collectd metrics?
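For comparison, the unit is something CloudWatch attaches per datum: the PutMetricData API accepts an explicit Unit on each MetricDatum. Below is a hedged workaround sketch using the AWS SDK for Java v2, outside the collectd pipeline; the namespace, dimension and value are placeholders.
import software.amazon.awssdk.services.cloudwatch.CloudWatchClient;
import software.amazon.awssdk.services.cloudwatch.model.Dimension;
import software.amazon.awssdk.services.cloudwatch.model.MetricDatum;
import software.amazon.awssdk.services.cloudwatch.model.PutMetricDataRequest;
import software.amazon.awssdk.services.cloudwatch.model.StandardUnit;

public class PutHeapUsedMetric {
    public static void main(String[] args) {
        try (CloudWatchClient cw = CloudWatchClient.create()) {
            MetricDatum datum = MetricDatum.builder()
                    .metricName("collectd_GenericJMX_java_heap_used")
                    .unit(StandardUnit.BYTES)           // explicit CloudWatch unit
                    .value(123456789.0)                 // placeholder value in bytes
                    .dimensions(Dimension.builder()
                            .name("host").value("my-host").build())  // placeholder dimension
                    .build();
            cw.putMetricData(PutMetricDataRequest.builder()
                    .namespace("CWAgent")               // placeholder/assumed namespace
                    .metricData(datum)
                    .build());
        }
    }
}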

Related

Custom Redis/Jedis Metrics

I am attempting to add additional custom metrics to my Jedis branch, but cannot figure out where they are handled. I have searched for the Prometheus metrics (by metric name) in Jedis and all of the other Redis projects with no success.
Where in the code-base are the current metrics for Redis/Jedis configured?

How to identify the CloudWatch metrics for a specific KCL application in Kinesis streams

We have multiple Kinesis consumer applications (KCL 2.0) consuming data from the same Kinesis stream. All of the consumers are sending metrics to CloudWatch, and those metrics are showing up there.
If I wanted to specifically understand and scale one consumer application to multiple instances, how can we achieve that?
CloudWatch metrics: GetRecords iterator age, Incoming data - Sum (Count)
KCL metrics are provided under a separate namespace in CloudWatch. The namespace used to upload metrics is the applicationName that you provide in the KCL configuration. So if you have multiple KCL applications with different applicationNames, you will find their metrics in the CloudWatch metrics console under "Custom namespaces".
A complete list of KCL metrics can be found here.
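To illustrate where those metrics end up, here is a minimal sketch (AWS SDK for Java v2) that lists the metrics in the namespace named after the applicationName; "my-kcl-app" is a placeholder for your application name.
import software.amazon.awssdk.services.cloudwatch.CloudWatchClient;
import software.amazon.awssdk.services.cloudwatch.model.ListMetricsRequest;
import software.amazon.awssdk.services.cloudwatch.model.ListMetricsResponse;
import software.amazon.awssdk.services.cloudwatch.model.Metric;

public class ListKclMetrics {
    public static void main(String[] args) {
        // The KCL uploads its metrics under a namespace equal to the
        // applicationName from the KCL configuration.
        try (CloudWatchClient cw = CloudWatchClient.create()) {
            ListMetricsResponse resp = cw.listMetrics(
                    ListMetricsRequest.builder().namespace("my-kcl-app").build());
            for (Metric m : resp.metrics()) {
                System.out.println(m.metricName() + " " + m.dimensions());
            }
        }
    }
}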

Sensor data timestamps using VictoriaMetrics

I am trying to figure out how to record timestamped sensor data to an instance of VictoriaMetrics. I have an embedded controller with a sensor that is read once per second. I would like VictoriaMetrics to poll the controller once a minute, and log all 60 readings with their associated timestamps into the TSDB.
I have the server and client running, and measuring system metrics is easy, but I can't find an example of how to get a batch of sensor readings to be reported by the embedded client, nor have I been able to figure it out from the docs.
Any insights are welcome!
VictoriaMetrics supports data ingestion via various protocols. All of these protocols support batching, i.e. multiple measurements can be sent in a single request. So you can choose the best-suited protocol for inserting batches of collected measurements into VictoriaMetrics. For example, if the Prometheus text exposition format is selected for data ingestion, then a batch of metrics could look like the following:
measurement_name{optional="labels"} value1 timestamp1
...
measurement_name{optional="labels"} valueN timestampN
VictoriaMetrics can poll (scrape) metrics from the configured address via HTTP. It expects the application to return metric values in the text exposition format. This format is compatible with Prometheus, so Prometheus client libraries for different languages are compatible with VictoriaMetrics as well.
There is also a how-to guide for instrumenting a Go application to expose metrics and scrape them via VictoriaMetrics here. It describes the monitoring basics for any service or application.
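As a rough sketch of the push variant (Java 11+ HttpClient), a batch of 60 timestamped readings can be sent in one POST to VictoriaMetrics' Prometheus-format import endpoint. The metric name, label, host/port and the /api/v1/import/prometheus path are assumptions to check against your setup.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PushSensorBatch {
    public static void main(String[] args) throws Exception {
        // Build one Prometheus-text-format line per reading, with an explicit
        // timestamp in milliseconds so each of the 60 per-second samples keeps
        // its original collection time.
        StringBuilder body = new StringBuilder();
        long now = System.currentTimeMillis();
        for (int i = 0; i < 60; i++) {
            double reading = 20.0 + Math.random();        // placeholder sensor value
            long ts = now - (59 - i) * 1000L;             // one sample per second
            body.append("sensor_temperature_celsius{sensor=\"probe1\"} ")
                .append(reading).append(' ').append(ts).append('\n');
        }

        // POST the whole batch in a single request; host, port and path are assumptions.
        HttpRequest req = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8428/api/v1/import/prometheus"))
                .POST(HttpRequest.BodyPublishers.ofString(body.toString()))
                .build();
        HttpResponse<String> resp = HttpClient.newHttpClient()
                .send(req, HttpResponse.BodyHandlers.ofString());
        System.out.println("status: " + resp.statusCode());
    }
}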

API or other queryable source for getting total NiFi queued data

Is there an API endpoint or other queryable source where I can get the total queued data?
Setting up a little dataflow in NiFi to monitor NiFi itself sounds sketchy, but if it's common practice, so be it. Anyway, I cannot find the API endpoint to get that total.
Note: I have a single NiFi instance; I don't have, nor will I implement, S2S reporting since I am on a single-instance, single-node NiFi setup.
The Site-to-Site reporting tasks were developed because they work for clustered NiFi, standalone NiFi, and multiple instances thereof. You'd just need to put an Input Port on your canvas and have the reporting task send to that.
An alternative as of NiFi 1.10.0 (via NIFI-6780) is to get the nifi-sql-reporting-nar and use QueryNiFiReportingTask, which lets you run a SQL query to get the metrics you want. It uses a RecordSinkService controller service to determine how to send the results; there are various implementations such as Site-to-Site, Kafka, Database, etc. The NAR is not included in the standard NiFi distribution due to size constraints, but you can get the latest version (1.11.4) here, or change the URL to match your NiFi version.
#jonayreyes You can find information about how to get queue data from the NiFi API here:
NiFi Rest API - FlowFile Count Monitoring
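For a single-node setup, a minimal sketch of polling the REST API for the controller-level queued totals could look like this (Java 11+ HttpClient). The /nifi-api/flow/status path and the response field names are assumptions to verify against the REST API docs for your NiFi version, and a secured instance would also need authentication.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class NifiQueuedTotals {
    public static void main(String[] args) throws Exception {
        // GET the controller-level status; the response JSON is expected to carry
        // the total queued FlowFile count and bytes (check the exact field names,
        // e.g. controllerStatus.flowFilesQueued / bytesQueued, against your version).
        HttpRequest req = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/nifi-api/flow/status"))  // assumed host/port
                .GET()
                .build();
        HttpResponse<String> resp = HttpClient.newHttpClient()
                .send(req, HttpResponse.BodyHandlers.ofString());
        System.out.println(resp.body());  // parse the JSON with your preferred library
    }
}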

ActiveMQ: get the number of messages consumed/produced per second

Is there any way in ActiveMQ to get the number of messages consumed/produced per second/minute at the broker end?
I have tried the JMeter configuration from http://activemq.apache.org/jmeter-performance-tests.html but there are hardly any performance metrics I can gather.
thanks
If you wanted to write this yourself, then you should use JMX on your broker. The broker MBean has "TotalEnqueueCount" and "TotalDequeueCount" attributes. You can poll those values at specific intervals and calculate yourself how many messages per second/minute/hour are being produced to or consumed from your broker.
You'll need to make sure you have JMX set up on the broker side, of course. See here for more details on that: http://activemq.apache.org/jmx.html
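A minimal sketch of that polling approach, assuming the default JMX connector URL and broker name (adjust both to match your installation):
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class BrokerRatePoller {
    public static void main(String[] args) throws Exception {
        // The JMX URL and brokerName below are assumptions (common ActiveMQ defaults);
        // adjust them to the JMX settings enabled on your broker.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        ObjectName broker = new ObjectName(
                "org.apache.activemq:type=Broker,brokerName=localhost");

        JMXConnector connector = JMXConnectorFactory.connect(url);
        MBeanServerConnection conn = connector.getMBeanServerConnection();

        long prevEnq = (Long) conn.getAttribute(broker, "TotalEnqueueCount");
        long prevDeq = (Long) conn.getAttribute(broker, "TotalDequeueCount");
        for (int i = 0; i < 5; i++) {      // sample five one-minute windows
            Thread.sleep(60_000);
            long enq = (Long) conn.getAttribute(broker, "TotalEnqueueCount");
            long deq = (Long) conn.getAttribute(broker, "TotalDequeueCount");
            System.out.printf("produced/min=%d consumed/min=%d%n",
                    enq - prevEnq, deq - prevDeq);
            prevEnq = enq;
            prevDeq = deq;
        }
        connector.close();
    }
}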
To simply view total enqueue/dequeue stats, use JConsole or the web console.
If you need to process them further (to calculate rates, etc.), then you should do one of the following:
access stats programmatically using the Java JMX APIs and gather/process them over time
use a third-party monitoring tool (Cacti and Splunk can also help with this)
Another option is to use the Camel DataSet component to simulate data routing and gather stats.