How to check the leader node in Schema Registry version 7.x? - confluent-schema-registry

I want to check the leader node of the Schema Registry application in the newer 7.x version. In older versions, Schema Registry connected to ZooKeeper, where we could get the leader node using get /schema_registry/schema_registry_master. But in the newer version, Schema Registry connects to the Kafka brokers directly. In that case, how can we find the leader node of Schema Registry?
I didn't find any such node in ZooKeeper when I tried to get the leader.
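For reference, the older ZooKeeper-based check and two possible 7.x alternatives are sketched below. This is a sketch under assumed defaults (ZooKeeper on localhost:2181, default Confluent log location); the JMX metric name comes from Confluent's docs for earlier releases and may differ in 7.x:

# Old way (ZooKeeper-based leader election):
zookeeper-shell localhost:2181 get /schema_registry/schema_registry_master

# Newer way (Kafka-based leader election): each instance exposes the JMX
# metric kafka.schema.registry:type=master-slave-role, which reports 1 on
# the leader and 0 on followers (assumption: name unchanged in 7.x).
# Alternatively, grep the Schema Registry log for the rebalance result
# (log path is an assumption):
grep -i "leader" /var/log/confluent/schema-registry/schema-registry.log | tail -n 5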

Related

Clone an existing MobileFirst Platform Foundation (MFP) V8 environment

How do I create a clone of an MFP V8 environment? The clone will have the same topology, with the foundation server and the Oracle database server on a new host. We have three Oracle databases, for the MFP Core, Admin, and App Center. Then we have our major file-based Analytics database. We also plan to apply the latest iFix on the foundation server and the underlying Liberty server.
High-level steps for moving the runtime would be:
Install Oracle on the new host.
Move the data from the current DB to the new DB.
Install Liberty on the new host.
Install MFP on the new host.
Run the Server Configuration Tool or the ant tasks to configure the runtime
environment (point to the new Liberty and the new Oracle instance); a
sketch of the ant invocation follows this list.
Restart the Liberty server.
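A hedged sketch of the configuration step using the ant tasks; the sample file name, install path, and target are assumptions based on the configuration samples MFP ships under MobileFirstServer/configuration-samples, so adjust them to your installation:

# Hypothetical paths and file names -- adjust to your environment:
cd /opt/IBM/MobileFirst_Platform_Server/MobileFirstServer/configuration-samples
# Edit configure-liberty-oracle.xml to point at the new Liberty and the
# new Oracle instance, then run the install target:
ant -f configure-liberty-oracle.xml install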
Please note the important update in the MFP 8.0 iFix release notes here -
https://mobilefirstplatform.ibmcloud.com/blog/2017/03/09/8-0-ifix-release/
Regarding analytics data movement, please consider the following points:
In an Analytics cluster, the individual nodes store data under a
folder configured via a JNDI property called analytics/datapath. By
default, on every node it is the ./analyticsData directory (within the
MFP Analytics location).
If you are moving data from one Analytics cluster to a new Analytics
cluster, make sure the new cluster has the same number of nodes.
Copying the analytics data should follow the pattern below (make sure the
server nodes are stopped while copying, to avoid any lock files being in
place); see the copy sketch after this list.
Node-1 of Old Analytics Server ---> Node-1 of New Analytics Server
Node-2 of Old Analytics Server ---> Node-2 of New Analytics Server
... and so on...
Make sure the ./analyticsData directories on the new analytics cluster
nodes are empty and the nodes are stopped.
Make sure the analyticsData copied to the new machines keeps the same
directory structure as on the previous cluster nodes (by default, the
analyticsData directory contains a directory called worklight). Keep the
structure the same.
Make sure the analytics JNDI properties are set the same as on the old
machines (the node IPs can change to match the new machines; that's fine),
especially in the analytics server.xml.
Start the new analytics nodes and verify the data (make sure the date
filter on the analytics console is set correctly to view the data).
Important note: make sure a backup of all the node-wise analyticsData is
kept safe. Using tar.gz is better for copying data from node to node.
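A minimal sketch of the node-for-node copy described above, assuming three nodes, SSH access between the machines, and an analytics install path of /opt/MFP/analytics (all of these are assumptions; adjust to your environment):

# Run only while all analytics nodes (old and new) are stopped; the node
# count must match the old cluster, and ./analyticsData on each new node
# must be empty beforehand.
for i in 1 2 3; do
  ssh old-analytics-$i "cd /opt/MFP/analytics && tar czf /tmp/analyticsData-$i.tar.gz analyticsData"
  scp old-analytics-$i:/tmp/analyticsData-$i.tar.gz /tmp/
  scp /tmp/analyticsData-$i.tar.gz new-analytics-$i:/tmp/
  ssh new-analytics-$i "cd /opt/MFP/analytics && tar xzf /tmp/analyticsData-$i.tar.gz"
done
# Keep the local /tmp/analyticsData-*.tar.gz files as the node-wise backups.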

Unable to connect to local RabbitMQ on Windows 10

I've installed RabbitMQ (the latest version downloadable from the RabbitMQ website) on my Windows 10 machine. It installed with Erlang 19.1.
I'm trying to install RabbitMQ Web UI Management Tools using the following command (using RabbitMQ Command Prompt):
rabbitmq-plugins enable rabbitmq_management
I'm getting the following error:
The directory name is invalid.
The filename, directory name, or volume label syntax is incorrect.
The filename, directory name, or volume label syntax is incorrect.
Plugin configuration unchanged.
Applying plugin configuration to rabbit@[0x7FF9A8527044]... failed.
* Could not contact node rabbit@[0x7FF9A8527044].
Changes will take effect at broker restart.
* Options: --online - fail if broker cannot be contacted.
--offline - do not try to contact broker.
I've looked this up on SO and tried stopping and restarting and overriding the Erlang cookie, but nothing helps.
I think there's a problem with RabbitMQ itself. The service is marked as started, but if I try to telnet to the default port (5672), it fails (it's not a firewall issue; I've disabled the firewall).
Also, I don't see any log files created for RabbitMQ or any related Event Log messages, so it's hard to diagnose the exact problem.
I also tried uninstalling and reinstalling both Erlang and RabbitMQ. It still didn't help.
How do I further diagnose the problem?
Found a solution to the problem (downgrading Erlang did not work in my case, but I left it on Erlang 18 anyway, in case there were other issues with version 19).
What caught my eye was this line: Applying plugin configuration to rabbit@[0x7FF9A8527044]... failed. It seems like it's trying to connect to a rabbit instance with the wrong machine name.
I then ran rabbitmqctl.bat status, which failed but again showed that it was trying to connect to [0x7FF9A8527044], while the node name was rabbit@my-machine-name. So I started reading the configuration section on the RabbitMQ website, and the solution was simple: set the node name manually.
All I had to do was add an environment variable named RABBITMQ_NODENAME with the value rabbit@localhost. And that's it. Problem solved!
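On Windows that looks roughly like the following (run from an elevated RabbitMQ Command Prompt; recreating the service so it picks up the new name is my assumption of what's needed, and a new console may be required for setx to take effect):

:: Set the variable machine-wide, then recreate the service:
setx RABBITMQ_NODENAME "rabbit@localhost" /M
rabbitmq-service.bat remove
rabbitmq-service.bat install
rabbitmq-service.bat start
rabbitmq-plugins enable rabbitmq_management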
You may be running into Erlang 19 incompatibility issues; there has been some history of Erlang 19 support problems with RabbitMQ. Try installing Erlang 18 instead.
If that fails, I would recommend using Docker for Windows and installing / running RabbitMQ in that. I've moved all my services like RabbitMQ, MongoDB, etc. into Docker containers and it's made my life as a dev so much simpler.
In my case I had to trash the local account config located at %APPDATA%\RabbitMQ\.
Deleting the entire folder and reinstalling the service did the trick.
RabbitMQ 3.6.14
Erlang 20.1 OTP
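A sketch of that cleanup (Windows cmd; stopping and removing the service before deleting the folder is an assumption, to avoid locked files):

rabbitmq-service.bat stop
rabbitmq-service.bat remove
rmdir /s /q "%APPDATA%\RabbitMQ"
rabbitmq-service.bat install
rabbitmq-service.bat start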

Unable to Start RabbitMQ Due to Inconsistent Node

I'm attempting to start a RabbitMQ node that was disconnected due to an error during setup.
Now I'm unable to start the node because of the inconsistent node error. Reading online, all arrows point to a mnesia directory for node info, but this directory does not exist on my server.
How can I force a node to forget its node configuration if the service doesn't start?
My problem was that my node was persistently retaining its last known connections.
I had to delete the data in my node's data partition in order for it to forget its last known connections.
Once that was deleted, I was able to start the node in isolation and then join it to the disk node as a RAM node.
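A sketch of that recovery on Linux (the data path, service manager, and the disk node's name are all assumptions; adapt them to your setup):

# Stop the broker and wipe its persisted cluster state (the mnesia data):
sudo systemctl stop rabbitmq-server
sudo rm -rf /var/lib/rabbitmq/mnesia/
sudo systemctl start rabbitmq-server
# Rejoin the cluster as a RAM node:
sudo rabbitmqctl stop_app
sudo rabbitmqctl join_cluster --ram rabbit@disk-node-host   # hypothetical node name
sudo rabbitmqctl start_app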

Couchbase 2.5, 2 nodes with 1 replica: 1 node fails and the service is no longer available

We are testing Couchbase with a two-node cluster with one replica.
When we stop the service on one node, the other one does not respond until we restart the service or manually fail over the stopped node.
Is there a way to keep the service available from the good node when one node is temporarily unavailable?
If a node goes down, then in order to activate the replicas on the other node you will need to manually fail it over. If you want this to happen automatically, you can enable auto-failover, but in order to use that feature I'm pretty sure you must have at least a three-node cluster. When you want to add the failed node back, you can just re-add it to the cluster and rebalance.
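For illustration, the manual failover / re-add / rebalance cycle with couchbase-cli might look like this (the host names and credentials are assumptions, and flag spellings vary between Couchbase versions, so check your release's CLI reference):

# Fail over the downed node from the surviving one:
couchbase-cli failover -c good-node:8091 -u Administrator -p password --server-failover=down-node:8091
# Once the failed node is healthy again, re-add it and rebalance:
couchbase-cli server-readd -c good-node:8091 -u Administrator -p password --server-add=down-node:8091
couchbase-cli rebalance -c good-node:8091 -u Administrator -p password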

Datastax OpsCenter not showing nodes

I installed DataStax Enterprise on my Windows 7 system, but it is not displaying any nodes in the OpsCenter dashboard. (Actually, I re-installed DataStax due to an issue with the previous installation.)
I am getting the node details on the command line using the nodetool command, but no node is present in the DataStax OpsCenter dashboard.
I think the OpsCenter agent is failing to connect to the node.
Please help me
Thanks,
Subhra
The agent might not be started on your system. On Linux it's in /usr/share/datastax-agent/bin; run the install_agent script there.
Also check that the ports used by OpsCenter are not blocked.
Follow the procedure below:
1) Check that the datastax-agent is installed on the nodes and that the service is running.
2) Check that the port connections for the datastax-agent are open:
http://docs.datastax.com/en/archived/opscenter/5.1/opsc/reference/opscPorts_r.html
3) Reconfigure your existing cluster details in OpsCenter, after deleting the previous configuration in OpsCenter.
4) If the issue still exists, check the OpsCenter log file (opscenterd.log); a diagnostic sketch follows.
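A short diagnostic sketch of those steps on Linux (61620/61621 are the documented OpsCenter/agent defaults; the host names and log path are assumptions, and paths differ on Windows):

# Is the agent installed and running on each node?
sudo service datastax-agent status || sudo service datastax-agent start
# Are the OpsCenter ports reachable? 61620 (opscenterd) and 61621 (agents):
nc -zv opscenter-host 61620
nc -zv cassandra-node 61621
# If nodes still don't appear, check the OpsCenter daemon log:
tail -n 100 /var/log/opscenter/opscenterd.log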