Persistent Data Migration from HiveMQ 3.4.5 to HiveMQ 4.8.1 via script instead of Web UI - migration

We are trying to migrate from HiveMQ version 3.4.5 to HiveMQ version 4.8.1.
While exploring the HiveMQ documentation, it seems data migration can be done via the Web UI:
https://www.hivemq.com/docs/hivemq/4.8/upgrade/4-0-to-4-1.html#persistent-data-migration
However, due to security and other reasons we would prefer not to expose the Web UI, and we are looking for a script-based or other alternative approach that can be used both to export data from the older HiveMQ and to import it into the newer one.
Any suggestions would be deeply appreciated.
Thanks in advance.

Related

ScyllaDB UI access for querying data

I have ScyllaDB installed in the cloud. I want to run queries and check the data. Is there any way to access it with a desktop UI client, or does it provide a web UI by default?
Thanks
The typical interface to ScyllaDB is cqlsh, which is a command-line tool. Documentation is at https://docs.scylladb.com/getting-started/cqlsh/
There are a few GUIs that claim to be front-ends for Cassandra. These should work with Scylla, but I've never used them.
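If you would rather script your queries than use a GUI, here is a minimal sketch using the Python cassandra-driver, which speaks the same CQL native protocol that ScyllaDB supports. The hosts, keyspace, and query below are placeholders; the driver is a separate install.

```python
def run_query(hosts, keyspace, cql):
    """Run a CQL query against a ScyllaDB cluster and return the rows.

    Requires the DataStax driver (pip install cassandra-driver);
    ScyllaDB is wire-compatible with Cassandra's CQL protocol.
    """
    # Imported lazily so the sketch can be loaded without the driver installed.
    from cassandra.cluster import Cluster

    cluster = Cluster(hosts)
    session = cluster.connect(keyspace)
    try:
        return list(session.execute(cql))
    finally:
        cluster.shutdown()

# Example usage (placeholder host and keyspace):
# rows = run_query(["10.0.0.5"], "my_keyspace", "SELECT * FROM my_table LIMIT 10")
```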

Automated Testing of NiFi flows using Jenkins

Is there any way to automatically run regression/functional tests on NiFi flows using a Jenkins pipeline?
I searched for this without any success.
Thanks for your help.
With the recent release of NiFi 1.5.0 and NiFi Registry 0.1.0, the community has come together to produce a number of SDLC/CI-CD integration tools to make using things like Jenkins Pipeline easier.
There are both Python (NiPyAPI) and Java (NiFi-Toolkit-CLI) API wrappers being produced by a team of collaborators to allow scripted manipulation of NiFi flows across different environments.
Common functions include interaction with integrated version control, import/export of flows as JSON documents, deployment between environments, starting/stopping of flows, etc.
We are working quickly towards supporting things like an integrated wrapper for declarative Jenkins Pipelines. I would add that this is all being done in a public codebase under the Apache license, so we (I am the lead NiPyAPI author) would welcome your collaboration.
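As a rough sketch of what a Jenkins-driven step could look like with NiPyAPI (the package must be installed separately; the URL and flow name below are placeholders, and the exact calls should be double-checked against the current NiPyAPI documentation):

```python
def start_flow(nifi_api_url, flow_name):
    """Start a named NiFi process group via NiPyAPI -- suitable for calling
    from a Jenkins pipeline stage, e.g. sh 'python start_flow.py'.

    Requires: pip install nipyapi. Arguments here are placeholders.
    """
    # Imported lazily so the sketch can be loaded without the package installed.
    import nipyapi

    # Point the client at the target environment's REST API.
    nipyapi.config.nifi_config.host = nifi_api_url  # e.g. "http://nifi:8080/nifi-api"

    # Look up the process group by name and schedule (start) it.
    pg = nipyapi.canvas.get_process_group(flow_name, identifier_type="name")
    if pg is None:
        raise ValueError(f"process group {flow_name!r} not found")
    nipyapi.canvas.schedule_process_group(pg.id, scheduled=True)
```

A Jenkins pipeline would typically run one such script per environment (dev, staging, prod), with the host URL supplied as a build parameter.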

Better JMeter report

Currently I use the JMeter aggregate report or summary report when submitting reports, but the recipients expect something more. How can I provide that? Are there any plugins for capturing server resource usage during a load test?
Reporting: since JMeter 3.0 there is an HTML Reporting Dashboard that can be generated during the test run. It contains exhaustive overview information. If you need to find the cause of a bottleneck, memory leak, or similar, you can consider the extra graphs available via the JMeter Plugins project.
The same JMeter Plugins project provides PerfMon, a client-server application that can collect over 70 different metrics and plot them via a JMeter Listener. See the How to Monitor Your Server Health & Performance During a JMeter Load Test guide for detailed setup and usage instructions.
There are quite a few plug-ins available that can help you analyze the results better; see https://jmeter-plugins.org/ for details.
The most popularly used ones are:
Response Times Over Time
Response Times Percentiles
Transactions per Second
Response Latencies Over Time
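If you want these numbers without a GUI at all, the same aggregate-report-style statistics can be computed from JMeter's CSV .jtl results file with a short script. A sketch, assuming the default `label`, `elapsed`, and `success` CSV columns:

```python
import csv
import io
import statistics

def aggregate(jtl_text):
    """Compute aggregate-report-style stats (sample count, average,
    90th percentile, error rate) per sampler label from JMeter CSV output."""
    per_label = {}
    for row in csv.DictReader(io.StringIO(jtl_text)):
        s = per_label.setdefault(row["label"], {"elapsed": [], "errors": 0})
        s["elapsed"].append(int(row["elapsed"]))
        if row["success"].lower() != "true":
            s["errors"] += 1

    report = {}
    for label, s in per_label.items():
        n = len(s["elapsed"])
        report[label] = {
            "samples": n,
            "average_ms": sum(s["elapsed"]) / n,
            # quantiles(n=10) yields 9 cut points; the last is the 90th percentile.
            "p90_ms": statistics.quantiles(s["elapsed"], n=10)[-1] if n > 1 else s["elapsed"][0],
            "error_pct": 100.0 * s["errors"] / n,
        }
    return report

sample = """timeStamp,elapsed,label,success
1,120,Login,true
2,180,Login,true
3,95,Login,false
"""
print(aggregate(sample))
```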
For server usage, you can use the following, which come with the JMeter plug-ins:
PerfMon Metrics Collector and Server Agent, or
On Unix-based systems, the sar command (from the sysstat package) or vmstat; on Windows, use Perfmon to capture system utilization data while the test is running. You can then use kSar (https://sourceforge.net/projects/ksar/) to plot graphs from the data collected with sar.
If you have collected data using Perfmon, plot the graphs using PAL: https://pal.codeplex.com/
In this case, I would suggest using Grafana. It shows real-time results, and best of all, it can be configured to fit your needs.
Now, how do you use it? It is not that difficult.
If you're using a Mac or Linux (any flavour), things are easy. If you're using Windows, I would suggest using a virtual machine, because Windows blocks traffic after a certain number of requests, and that causes a lot of headaches.
In my case, I used a virtual machine to set up Ubuntu and then configured Grafana inside it.
For working with Grafana, you need to have these two things installed:
Grafana itself
InfluxDB for the backend
Links for both are below:
https://grafana.com/grafana/download?platform=linux
https://portal.influxdata.com/downloads/
Once installed and set up, you need to use the Backend Listener to push results to the Graphite client (installed automatically along with InfluxDB).
I know it is a bit confusing, but once you understand it, you and your client will love the detailed reports.
Remember, Grafana is all about configuration.
Let me know if you have any questions regarding this.
Happy to help. :)
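For context on what the Backend Listener actually sends: each sampled metric ultimately lands in InfluxDB as a point in its line protocol. A sketch of that wire format (the measurement, tag, and field names here are illustrative, not JMeter's exact schema):

```python
import time

def influx_line(measurement, tags, fields, ts_ns=None):
    """Format one point in InfluxDB line protocol:
    measurement,tag1=v1 field1=v1,field2=v2 timestamp
    Tags and fields are sorted for deterministic output."""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    ts = ts_ns if ts_ns is not None else time.time_ns()
    return f"{measurement},{tag_str} {field_str} {ts}"

line = influx_line("jmeter", {"transaction": "Login"}, {"avg": 131.7, "count": 3}, ts_ns=1)
print(line)
```

Grafana then queries these measurements from InfluxDB and draws the dashboards.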

HiveServer versus HiveServer2

I know that HiveServer does not support multi-client concurrency and authentication, and that these are handled in HiveServer2.
I want to know how this is handled in HiveServer2 and why HiveServer does not support it.
Thanks,
Sree
The answer to this question is simple; I learned it a few days ago.
Every client connects through the Thrift API to HiveServer or HiveServer2, which in turn launches a process that converts the client code into Hive-understandable code via language-specific class libraries.
As everyone is aware, a process can be single- or multi-threaded. In HiveServer1 the launched process is single-threaded, because the class libraries do not support multiple threads. In HiveServer2 these have been upgraded to multi-threaded class libraries, and thus multiple sessions are supported.
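As a plain-Python illustration of that difference (this is an analogy, not Hive code): serving client sessions one after another versus a thread per session.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_session(client_id):
    """Stand-in for compiling and executing one client's query."""
    time.sleep(0.1)
    return f"result for {client_id}"

clients = ("a", "b", "c")

# HiveServer1-style: a single-threaded process serves clients one at a time.
start = time.time()
serial = [handle_session(c) for c in clients]
serial_secs = time.time() - start

# HiveServer2-style: a thread per session serves clients concurrently.
start = time.time()
with ThreadPoolExecutor(max_workers=len(clients)) as pool:
    concurrent = list(pool.map(handle_session, clients))
concurrent_secs = time.time() - start

print(serial_secs, concurrent_secs)
```

The concurrent run finishes in roughly a third of the serial time, which is the benefit multi-threaded class libraries bring to HiveServer2.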
Regarding security, please refer to the link below:
http://blog.cloudera.com/blog/2013/07/how-hiveserver2-brings-security-and-concurrency-to-apache-hive/
Thanks,
Sree

Infrastructure monitoring using collectd with Graphite and Grafana

I am using collectd to gather system-performance and MySQL metrics and display them in Grafana. That part is done; now I want to monitor the web server and other services as well, but I am facing some issues. Is there another way to monitor them?
If you're willing to enable mod_status, collectd can scrape that page to provide web server metrics:
https://collectd.org/wiki/index.php/Plugin:Apache
Just to elaborate, you can use the collectd MySQL plugin to collect metrics from MySQL. Please see https://collectd.org/wiki/index.php/Plugin:MySQL
However, I would recommend a more robust setup using InfluxDB as the datastore, since it is designed to handle high I/O loads.
Check out this guide; it was very useful to me: https://www.linkedin.com/pulse/monitoring-collectd-influxdb-grafana-alan-wen (no, I'm not the author.)
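For services that have no ready-made collectd plugin, you can push custom metrics yourself through collectd's exec or unixsock plugins using its plaintext PUTVAL protocol. A sketch of the line format (the host, plugin, and type identifier parts below are examples; the type must exist in collectd's types.db):

```python
def putval(host, plugin, type_, value, interval=10, timestamp="N"):
    """Format one collectd PUTVAL line, as emitted by an exec-plugin script
    or written to the unixsock plugin. "N" means "now" in collectd's protocol.

    Identifier layout: host/plugin[-instance]/type[-instance]
    """
    identifier = f"{host}/{plugin}/{type_}"
    return f'PUTVAL "{identifier}" interval={interval} {timestamp}:{value}'

# Example: report a custom gauge for an nginx exec-plugin script.
line = putval("web01", "exec-nginx", "gauge-active_connections", 42)
print(line)
```

A small script emitting such lines on an interval, wired into collectd's exec plugin, lets you graph any service metric in Grafana alongside the built-in ones.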