Monitoring IBM AIX servers

We have AIX servers and we want to monitor their metric data in Kibana. Elastic Beats does not support AIX, so is there any way to send metric data to the ELK stack so that we can do monitoring? We can't use syslog tools; we are not sending logs, we only want to send metric data. We can install an endpoint agent on the AIX servers if needed.
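Since Beats has no AIX build, one workaround is a small cron-driven script on the AIX host that samples OS metrics and POSTs them to Elasticsearch's REST API, which Kibana can then chart like any other index. Below is a minimal sketch in Python, assuming Python with the requests library is available on the AIX box and Elasticsearch is reachable at elk-host:9200; the metrics-aix index name and the vmstat column positions are assumptions to adapt to your environment.

# Minimal sketch: push AIX CPU/memory metrics straight to Elasticsearch's
# REST API, bypassing Beats. Assumes Python with `requests` on the AIX host
# and Elasticsearch reachable at ES_URL (both placeholders to adjust).
import datetime
import subprocess
import requests

ES_URL = "http://elk-host:9200/metrics-aix/_doc"  # example index name

def collect_vmstat():
    # On AIX, `vmstat 1 1` prints headers plus one sample line.
    out = subprocess.check_output(["vmstat", "1", "1"], text=True)
    fields = out.strip().splitlines()[-1].split()
    # Column positions can vary between AIX releases; verify against yours.
    return {"runq": int(fields[0]), "blocked": int(fields[1]),
            "avm": int(fields[2]), "free": int(fields[3])}

def ship(metrics):
    doc = {"@timestamp": datetime.datetime.utcnow().isoformat() + "Z",
           "host": "aix-server-01", **metrics}
    requests.post(ES_URL, json=doc, timeout=10).raise_for_status()

if __name__ == "__main__":
    ship(collect_vmstat())  # e.g. run every minute from cron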

Related

HiveMQ/RabbitMQ as load-balancing MQTT node(s) before a ThingsBoard IoT system

Our endpoint devices are pushing data over MQTT to an IoT system based on the Thingsboard IoT platform. There is only one MQTT topic called /telemetry where all devices connect. The server knows which device the data belongs to based on the device's token used as the MQTT username.
Due to frequent peaks in data load, outages happen.
My question is:
Is it possible, and if so how, to use HiveMQ (or RabbitMQ or some similar product) between the devices and our IoT system to avoid data loss and smooth out the peaks?
This post explains how to use Quality of Service levels, offline buffering, throttling, automatic reconnect and more to avoid data loss and maintain uptime.
The tl;dr is that MQTT and HiveMQ have built-in features to help avoid data loss, guarantee delivery, absorb traffic spikes, and handle back-pressure.
It may be worth considering what you can do with your existing tools before expanding your deployment footprint, which just adds unnecessary complexity if unwarranted; the sketch below shows what the client side alone can do.
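To make that concrete, here is a minimal device-side sketch (Python, paho-mqtt 1.x API) using the features the post describes: QoS 1, a persistent session, and automatic reconnect with back-off. The broker address, client id and token are placeholders.

# Device publisher leaning on MQTT's built-in reliability: QoS 1
# (at-least-once), a persistent session, and reconnect back-off.
# Broker address, client id and token are placeholders.
import json
import time
import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="device-42", clean_session=False)
client.username_pw_set(username="DEVICE_TOKEN")         # token-as-username auth
client.reconnect_delay_set(min_delay=1, max_delay=120)  # automatic back-off

client.connect("broker.example.com", 1883)
client.loop_start()  # background network loop handles reconnects

while True:
    payload = json.dumps({"temperature": 21.5})
    # With QoS 1 the client retries until the broker acknowledges (PUBACK),
    # riding out short outages instead of silently dropping data.
    client.publish("/telemetry", payload, qos=1)
    time.sleep(10)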
I would recommend putting Apache Kafka or Confluent between the MQTT broker and ThingsBoard. Kafka persists all data to disk (rather than in RAM, as RabbitMQ does by default) and scales across multiple cluster nodes. You could also replay data into ThingsBoard by resetting consumer offsets, which is useful if a rule chain was misconfigured and you need ThingsBoard to reprocess the data.
To connect with Kafka/Confluent you can use the ThingsBoard Integration.
Find more details here:
https://medium.com/python-point/mqtt-and-kafka-8e470eff606b
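As a hedged sketch of that architecture (not the article's own code), the bridge below consumes from the MQTT broker and produces into a Kafka topic using paho-mqtt and kafka-python; hostnames and topic names are placeholders, and ThingsBoard would then consume the Kafka topic via its Integration.

# MQTT -> Kafka bridge sketch (paho-mqtt 1.x + kafka-python). Kafka absorbs
# load spikes on disk; ThingsBoard consumes at its own pace. Hostnames and
# topic names are placeholders.
import paho.mqtt.client as mqtt
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers="kafka:9092",
                         acks="all")  # wait until the broker has persisted

def on_message(client, userdata, msg):
    # Note: with a single shared /telemetry topic the device identity travels
    # in the MQTT username, so a per-device Kafka key would need that mapping;
    # keying by MQTT topic here is only illustrative.
    producer.send("telemetry", key=msg.topic.encode(), value=msg.payload)

bridge = mqtt.Client(client_id="mqtt-kafka-bridge", clean_session=False)
bridge.on_message = on_message
bridge.connect("mqtt-broker.example.com", 1883)
bridge.subscribe("/telemetry", qos=1)
bridge.loop_forever()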

How to check logs between OPC Publisher and IoT Hub to confirm the data transfer

I have set up IoT Edge on one of our machines and installed OPC Publisher, connecting it to one of our OPC UA servers, which sends data to OPC Publisher and on to IoT Hub. We had not received any data in our IoT Hub for the last 10 days, and suddenly today the data arrived. How can we troubleshoot why the data was missing for those 10 days?
You can generate a support bundle on your edge device that will collect the logs of all deployed modules as well as the edge runtime logs.
sudo iotedge support-bundle --since 11d
More details on troubleshooting IoT Edge here
You can first look into the logs of the publisher and validate whether the connection to the OPC UA server was/is active. If that looks fine, then look into the edgeHub logs and check whether the upstream connectivity to IoT Hub was affected.
One of the most powerful tools to monitor your edge deployments is the integration with Azure Monitor. It will collect metrics from the edgeHub and edgeAgent, which combined will give you an overview of where your messages are going. It can show you how many messages are sent to your upstream endpoint and when.
For a full overview of the capabilities, you can check out this blog post. Installation steps are here
Edit:
OPC Publisher also supports diagnostic logging, which will give you more information about the connections to OPC UA servers. To enable it, you need to set the diagnostics interval. You can do this by specifying the --di command argument in your createOptions:
"OPCPublisher":{
"settings":{
"image":"<image>",
"createOptions":{
"Cmd":["di=60"]
}
},
"type":"docker",
"version":"1.0",
"status":"running",
"restartPolicy":"always"
}
The example above will log diagnostic metrics every 60 seconds. You can then upload the logs using the support bundle command from Cristian's answer, or use the UploadSupportBundle direct method to do the same without needing access to the device.
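If you don't have shell access, here is a hedged sketch of calling the UploadSupportBundle direct method on the $edgeAgent module with the azure-iot-hub Python SDK; the connection string, device id and blob SAS URL are placeholders, and the payload follows the documented direct-method schema.

# Trigger UploadSupportBundle on $edgeAgent from the cloud side, so no
# device access is needed. Connection string, device id and SAS URL are
# placeholders.
from azure.iot.hub import IoTHubRegistryManager
from azure.iot.hub.models import CloudToDeviceMethod

registry = IoTHubRegistryManager.from_connection_string("<iothub-connection-string>")

method = CloudToDeviceMethod(
    method_name="UploadSupportBundle",
    payload={
        "schemaVersion": "1.0",
        "sasUrl": "<blob-container-sas-url>",  # where the bundle is uploaded
        "since": "11d",                        # cover the missing-data window
    },
)
resp = registry.invoke_device_module_method("<device-id>", "$edgeAgent", method)
print(resp.status)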

How can I set up Redis cluster mode or master-slave mode in PCF?

This is regarding a use case where we are trying to use Redis in PCF (Pivotal Cloud Foundry). We refresh the Redis cache once or twice daily with the required data, and an API then queries Redis and provides the response.
One particular concern for us is that we want API queries to be served from Redis only, meaning Redis must be available at all times. But whenever we refresh the Redis DB, Redis cannot serve the APIs because it is busy rewriting the keys. To avoid that, we wanted to set up Redis in cluster mode or master-slave mode, so that while one instance is being written to, another can be read from.
How can we set up Redis cluster or master-slave mode in PCF to fulfil this requirement?
Please provide any other suggestions as well that you may have.
At the time of writing, the Redis for Pivotal Platform product does not support clustering. See "Availability" in the docs: https://docs.pivotal.io/redis/2-3/erc.html#offerings.
All Redis for Pivotal Platform services are single VMs without clustering capabilities. This means that planned maintenance jobs (e.g., upgrades) can result in 2–10 minutes of downtime, depending on the nature of the upgrade. Unplanned downtime (e.g., VM failure) also affects the Redis service.
Redis for Pivotal Platform has been used successfully in enterprise-ready apps that can tolerate downtime. Pre-existing data is not lost during downtime with the default persistence configuration. Successful apps include those where the downtime is passively handled or where the app handles failover logic.
If you require clustered Redis, you'd need to look at a different offering. Redis Labs has some offerings that integrate with PCF, you could use a Cloud Provider's Redis offering, or you could host your own.
If the solution you use isn't integrated into PCF, you can create a user-provided service with cf cups and provide the Redis credentials to your application that way; to the app it will look just like a Redis service instance created through the marketplace (a sketch follows below).
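As a hedged illustration: after something like cf cups my-redis -p '{"host":"redis.example.com","port":"6379","password":"s3cret"}' and cf bind-service my-app my-redis, the app reads the credentials from the VCAP_SERVICES environment variable. The service name and credential keys below are whatever you chose when creating the service.

# Read Redis credentials from a user-provided service inside a PCF app.
# "my-redis" and the credential keys match the cf cups command above.
import json
import os
import redis

vcap = json.loads(os.environ["VCAP_SERVICES"])
creds = next(s for s in vcap["user-provided"]
             if s["name"] == "my-redis")["credentials"]

r = redis.Redis(host=creds["host"], port=int(creds["port"]),
                password=creds["password"])
print(r.ping())  # sanity-check the connection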

Free device logs for analysis

I am setting up a home SIEM lab using Splunk. I am looking for sources that can provide sample logs for various devices, including but not limited to the ones below:
Windows Logs
IIS Logs
IDS/IPS Logs
Based on these logs, I plan to build search queries for various events and then use them to build detection rules.
It is not clear why you need pre-made logs when you can generate them yourself. For example, you can set up a VM with Windows Server and install an agent like NXLog (or any log collection agent that can forward logs via TCP, UDP, TLS, or HTTP) to ship the logs to Splunk.
Check out the Montgomery County Data Portal. It's free:
https://data.montgomerycountymd.gov/
You could also connect to a crypto exchange API and have lots of data flow in in real time (see the sketch below for getting such events into Splunk).
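Whichever source you choose, you can push the events into Splunk through the HTTP Event Collector (HEC). A minimal sketch follows, assuming HEC is enabled on your Splunk instance; the host and token are placeholders.

# Feed sample events into Splunk via the HTTP Event Collector.
# Enable HEC first; the host and token below are placeholders.
import time
import requests

HEC_URL = "https://splunk.local:8088/services/collector/event"
HEADERS = {"Authorization": "Splunk <your-hec-token>"}

def send_event(event, sourcetype="demo:json"):
    body = {"time": time.time(), "sourcetype": sourcetype, "event": event}
    # verify=False only because a home lab often uses a self-signed cert
    r = requests.post(HEC_URL, headers=HEADERS, json=body, verify=False)
    r.raise_for_status()

send_event({"action": "login_failed", "user": "test", "src_ip": "10.0.0.5"})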

Kafka or Redis broker on embedded device

I would like to understand the implications of running a Kafka broker on an embedded device with a 2 GHz CPU and 4 GB of memory. I am also interested in others' personal experience of running Redis or Kafka under such stringent hardware constraints.
I've been playing around with a setup that logs data from sensors on a Raspberry Pi; the logs are pushed out to a web client that displays the data in real time. It's easier to use a broker such as Redis or Kafka to facilitate the data stream, but I am unsure about the performance of running one on an embedded device.
In short, what is it like to run a Redis or Kafka broker on an embedded device in terms of performance? By the way, this is done in a Python environment.
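For a sense of the setup being asked about: Redis's small footprint generally suits 4 GB of RAM far better than a JVM-based Kafka broker, and the sensor-to-web pipeline can be sketched with redis-py pub/sub as below. The channel name and read_sensor() are placeholders; whether it performs acceptably on your device is exactly the open question.

# Lightweight sensor fan-out on a Raspberry Pi using Redis pub/sub
# (redis-py). read_sensor() is a placeholder for real hardware access.
import json
import time
import redis

r = redis.Redis(host="localhost", port=6379)

def read_sensor():
    return {"temperature": 21.5, "ts": time.time()}  # placeholder values

def publish_loop():
    # Sensor side: sample once a second and publish.
    while True:
        r.publish("sensors", json.dumps(read_sensor()))
        time.sleep(1)

def consume():
    # Web side: the process feeding your web client subscribes here.
    sub = r.pubsub()
    sub.subscribe("sensors")
    for msg in sub.listen():
        if msg["type"] == "message":
            print(json.loads(msg["data"]))

Note that pub/sub is fire-and-forget; if you need buffering across consumer outages, Redis Streams would be the closer Kafka analogue.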