SAP HANA Vora distributed log service refused to start

I installed SAP HANA Vora on a 3-node MapR cluster. While trying to bring up the Vora service via the Vora Manager UI, I get the following error:
Error occurred while starting all services: vora-dlog refused to
start. Cannot continue Start All Jobs. Error: There are no health
checks registered for service vora-dlog.
The vora-manager log file displays the following error:
vora.vora-dlog: [c.xxxxxxx] : Error while creating dlog store.
nomad[xxxxx]: client: failed to query for node allocations: no known servers
nomad[xxxxx]: client:rpcproxy: No servers available.
All 3 nodes in the cluster have 2 IPs in different subnets. Can anyone suggest how to configure a health check for Consul? And what else could be wrong here?

The messages from the VoraMgr log file are not sufficient to understand the actual problem. Are there other messages from dlog before 'Error while creating dlog store.'? I have seen that message, for example, when the disk was full and the dlog could not create its local persistence.
Also, the 2 different networks could cause an issue like the one you described. You can configure the use of different network interface names on different nodes. However, on each node all Vora services, as well as the Vora Manager, must use the same network interface name. If using 2 different subnets, the configuration must allow network traffic between them. Could you give some additional information on your topology and network configuration?
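For reference on the health-check part of the question: a generic Consul service registration with a TCP check looks roughly like the snippet below. This is only a sketch; the service name comes from the error message, but the port, check interval, and file location are assumptions, and Vora Manager normally registers its services and checks itself, so treat it as a debugging aid rather than the official configuration.
/etc/consul.d/vora-dlog.json:
{
  "service": {
    "name": "vora-dlog",
    "port": 50000,
    "check": {
      "tcp": "127.0.0.1:50000",
      "interval": "10s",
      "timeout": "2s"
    }
  }
}
After dropping such a file in place you would run 'consul reload' on the agent. If the check then reports passing but Vora Manager still refuses to start the service, the problem is more likely the dual-subnet setup described above.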

Related

Gridgain console load balance

I have a GridGain three-node cluster and am also running the GridGain Web Console agent and Web Console on all three nodes. It is all hosted on Windows Server.
I would like to load balance my Web Console. The problem is I don't know how to share the user registration database, which it stores in a work directory. Can I use an external database to store all that information so that my cluster uses the same database?
There is a problem with the Web Console Agent as well. How do I share the tokens stored in default.properties?
There is no definitive guide on how to create a Web Console cluster for high availability.
Can someone please guide me on how I can form a cluster for the Web Console, sharing its user store and tokens?
Thanks
If you are looking for multi-cluster support, take a look at the documentation:
https://www.gridgain.com/docs/web-console/latest/multi-cluster-support
If you are looking for agent fault tolerance: just start several agents. The first agent will process all messages; the others will be in hot-standby mode.
If you are looking for connection fault tolerance between the agent and the cluster (if the cluster node that serves as the agent's connection point fails, the Web Console will lose its connection to the cluster), just specify several node addresses as a comma-separated list in the "node-uri" parameter (in default.properties or as a command-line argument).
For example:
node-uri=http://192.168.0.1:8080,http://192.168.0.2:8080,http://192.168.0.3:8080
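For illustration only, a default.properties along these lines would wire one agent to all three nodes; the tokens and server-uri values below are placeholders, so substitute your own:
# security token(s) issued by the Web Console (placeholder value)
tokens=<your-token>
# URI of the Web Console back end (placeholder host and port)
server-uri=http://webconsole.example.com:3000
# comma-separated list of cluster nodes, as described above
node-uri=http://192.168.0.1:8080,http://192.168.0.2:8080,http://192.168.0.3:8080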
Hope this helps.

How to set up multiple gemfire/geode WAN clusters on one machine for testing?

What's needed to run multiple gemfire/geode clusters on one machine? I'm trying to test using WAN gateways locally, before setting it up on servers.
I have one cluster (i.e. gemfire.distributed-system-id=1) up and running with one locator and one server.
I am trying to setup a second cluster (i.e. gemfire.distributed-system-id=2), but receive the following error when attempting to connect to the locator in cluster 2:
Exception caused JMX Manager startup to fail because: 'HTTP service
failed to start'
I assume the error is due to a JMX Manager already running in cluster 1, so I'm guessing I need to start a second JMX Manager on a different port in cluster 2. Is this a correct assumption? If so, how do I set up the second JMX Manager?
Your assumption is correct: the exception is thrown because the members of the first cluster already started some services (Pulse, the JMX manager, etc.) on the default ports.
You basically want to make sure that the properties http-service-port and jmx-manager-port (not an exhaustive list; there are other properties you need to look at) are different in the second cluster, as sketched below.
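As a sketch of what that can look like (the member name and all port numbers below are made-up examples, not required values), the second cluster's locator could be started with its JMX manager and HTTP service moved off the defaults:
# ports for cluster 2 chosen so they don't collide with cluster 1's defaults
gfsh -e "start locator --name=locator-ds2 --port=10335 --J=-Dgemfire.distributed-system-id=2 --J=-Dgemfire.jmx-manager-port=2099 --J=-Dgemfire.http-service-port=7575"
The same idea applies to the servers in cluster 2 (for example a different --server-port); once both clusters are up you can define the WAN gateway senders and receivers between them.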
Hope this helps.
Cheers.

Get number of connections from all hosts to my ActiveMQ broker

ActiveMQ broker setup:
Broker is running on machine: hostA
Clients from different hosts can connect to my broker instance running on hostA; there can be any number of clients from any host.
Is there a way to find out how many clients are connected to the broker, and also a listing that tells me how many connections there are to my broker from each host?
I want to do this without making assumptions about the number of hosts.
I can do this by using the lsof command and some parsing of its output, but I am in a situation where I cannot use that.
Is there any feature for this in the ActiveMQ command-line utility activemq-admin?
You can get to pretty much any MBean attribute ActiveMQ exposes via activemq-admin. There are no attributes or operations that give you a quick count of connections from specific clients, so you will have to do some work on your end to get all the details you want, but all the raw data is there.
Examples:
Broker Stats:
activemq-admin query --objname type=Broker,brokerName=localhost
Connection Stats:
activemq-admin query --objname type=Broker,brokerName=localhost,connector=clientConnectors,connectorName=<transport connector name>,connectionViewType=clientId,connectionName=*
See full doc here.
NOTE: as of this writing, the documentation has not been updated to reflect the MBean changes made in ActiveMQ, so the object names referenced in its examples are not correct.
You can get the object name (and example syntax) from JMX (using JConsole or VisualVM, for example) via the MBeanInfo. Each object name will start with something like org.apache.activemq:type. For the script, remove the "org.apache.activemq:" prefix and you should be in business for anything you need from JMX via the script.
You may also look into using Jolokia with your broker. Although it is not compatible with the activemq-admin script, it lets you reach everything the script can, and it also gives you access to all of the operations. In the past I've heavily used the activemq-admin script for local monitoring and command-line administration of the broker, but I have started converting everything to hit the Jolokia service. But again, activemq-admin will give you a way to access what you are looking for here.
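As a rough example of the Jolokia route (the port, credentials, and brokerName below are common defaults, and the exact object-name layout varies between ActiveMQ versions as noted above, so adjust for your installation):
# total number of client connections currently open on the broker
curl -u admin:admin "http://hostA:8161/api/jolokia/read/org.apache.activemq:type=Broker,brokerName=localhost/CurrentConnectionsCount"
# one entry per connection, keyed by remote address; grouping these by host
# gives the per-host counts the question asks for
curl -u admin:admin "http://hostA:8161/api/jolokia/read/org.apache.activemq:type=Broker,brokerName=localhost,connector=clientConnectors,connectorName=*,connectionViewType=remoteAddress,connectionName=*/RemoteAddress"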

MQ With WLS Foreign Server

I am facing two issues when I try to connect to MQ, which is deployed on a remote server, from WebLogic Server (WLS) by creating a Foreign Server.
1. When I try to connect to the MQ queue manager in Bindings mode (after importing the .bindings file), I keep getting the below error in the WLS console:
java.lang.UnsatisfiedLinkError: no mqjbnd05 in java.library.path
2. If I switch the transport to Client, I keep getting:
JMSWMQ0018: Failed to connect to queue manager '' with connection mode 'Client' and host name 'localhost'. Check the queue manager is started and if running in client mode, check there is a listener running. Please see the linked exception for more information.
Has anyone seen this, and are there any performance implications which dictate the use of client over bindings and vice versa?
TIA
Finally I was able to resolve this. I had to recreate the .bindings file in client mode, with changes to IVTsetup.bat, which is most likely present in
C:\Program Files\IBM\WebSphere MQ\java\bin. I had to run
def qcf(psQCF) TRANSPORT(CLIENT) HOST(SMEKA) PORT(1415) CHANNEL(ps_SRV_CHANNEL) QMGR(psQM)
to generate the .bindings file.
Refer to this link for more details:
http://publib.boulder.ibm.com/infocenter/wbihelp/v6rxmx/index.jsp?topic=/com.ibm.wbia_adapters.doc/doc/peoplesoft/peopleso103.htm
Where the question states that I try to connect to MQ which is deployed on a Remote Server from Weblogic Server I assume this means that WLS and WMQ are on two different hosts. If that is the case, then a bindings mode connection (which relies on shared memory segments) won't work.
The client mode connection appears to be using a CF that is pointed to localhost rather than the IP or hostname of the WMQ server. This would work for an application on the same host as the queue manager but not when the app and QMgr are on separate servers.
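In other words, the connection factory definition needs to carry the WMQ server's hostname rather than localhost. A hedged example of such a JMSAdmin definition, with a placeholder host name (the port, channel, and queue manager name are taken from the question), would be:
def qcf(psQCF) TRANSPORT(CLIENT) HOSTNAME(mq.server.example.com) PORT(1415) CHANNEL(ps_SRV_CHANNEL) QMGR(psQM)
Regenerate the .bindings file from that definition and point the Foreign Server at it.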
As far as choosing between client and bindings mode, the answer is that if the QMgr is local use bindings. This provides highest reliability, best performance and XA transactionality. When using client mode, two-phase XA commit is not supported without the Extended Transactional Client. Per the JMS specification, there is an ambiguity that can exist if an app loses the connection during a COMMIT call. Depending on how the app handles this it's possible to end up with duplicate messages. (The JMS spec refers to these as "functionally duplicate.") This ambiguity is much less likely to occur with a bindings mode connection since there is no network latency and not even any traversal of the IP stack or interface. So use bindings mode where possible.
UPDATE:
Removed note about Extended Transactional Client being a chargeable component. As of April 24th, XTC is free of charge for all versions of WMQ on all platforms.

Why are my WebLogic clustered MDB app deployments in warning state?

I have a WebLogic cluster on which I've deployed numerous topics and applications that use them. My applications uniformly show themselves in a Warning status. Looking at Monitoring on the deployment, I see the MDB application connects to Server #1, but on server #2 it shows this:
MDB application appName is NOT connected to messaging system.
My JMS Server is targeted to a migratable target, which is in turn targeted to the #1 server and has a cluster identified. Messages sent to either server all flow as expected; I just don't know why these deployments show a Warning state.
WebLogic 11g
This can be avoided by using the parameter below
<start-mdbs-with-application>false</start-mdbs-with-application>
In weblogic-application.xml, setting start-mdbs-with-application to false forces MDBs to defer starting until after the server instance opens its listen port, near the end of the server boot-up process.
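For context, the element sits under <ejb> in META-INF/weblogic-application.xml; a minimal sketch (the namespace shown is the one used by recent releases and may differ in yours) looks like this:
<weblogic-application xmlns="http://xmlns.oracle.com/weblogic/weblogic-application">
  <ejb>
    <!-- defer MDB startup until the server's listen port is open -->
    <start-mdbs-with-application>false</start-mdbs-with-application>
  </ejb>
</weblogic-application>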
If you want to perform startup tasks after JMS and JDBC services are available, but before applications and modules have been activated, you can select the Run Before Application Deployments option in the Administration Console (or set the StartupClassMBean’s LoadBeforeAppActivation attribute to “true”).
If you want to perform startup tasks before JMS and JDBC services are available, you can select the Run Before Application Activations option in the Administration Console (or set the StartupClassMBean’s LoadBeforeAppDeployments attribute to “true”).
Refer to: http://docs.oracle.com/cd/E13222_01/wls/docs81/ejb/message_beans.html
This is applicable to versions up to and including 12c, and to later releases as well.
I don't like unanswered questions, so I'm going to answer this one.
The problem is resolved, though I was not involved in its resolution. At present the problem only exists for the length of time it takes the JMS subsystem to fully initialize. During that period (with many queues, it can take a while) the JNDI system throws errors and the apps are truly in a warning state. Once JMS is fully initialized, everything goes green.
My belief is that someone corrected something in the JMS Server / Cluster config. I'll never know what it was.