Glassfish 3.1 Remote client connecting to JMS queue in cluster - glassfish

Glassfish 3.1.2
Ubuntu 12.04
I've created a cluster of two nodes and have a JMS queue.
I'm having issues trying to connect to this JMS queue using a remote standalone client.
The cluster JMS listener is on port 27676 and the queue is deployed to the cluster.
mq://Glassfish2:27676/,mq://Glassfish3:27676
When I connect using the code I'd use to connect to a standalone instance, the message is not received by the cluster.
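For reference, that client code is essentially a plain JNDI lookup over IIOP along these lines (a simplified sketch; the JNDI names are placeholders and gf-client.jar is assumed to be on the classpath):
import java.util.Properties;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.InitialContext;

public class RemoteQueueClient {
    public static void main(String[] args) throws Exception {
        // Point the ORB at the server's IIOP endpoint instead of the default localhost:3700.
        Properties env = new Properties();
        env.setProperty("org.omg.CORBA.ORBInitialHost", "Glassfish2"); // placeholder host
        env.setProperty("org.omg.CORBA.ORBInitialPort", "23700");

        InitialContext ctx = new InitialContext(env);
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/MyConnectionFactory"); // placeholder JNDI name
        Queue queue = (Queue) ctx.lookup("jms/MyQueue");                                  // placeholder JNDI name

        Connection connection = cf.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            session.createProducer(queue).send(session.createTextMessage("test"));
        } finally {
            connection.close();
        }
    }
}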
I believe it is using the default 7676 port. When the IIOP port is changed to 23700, which is the one the cluster (DAS) is using, I get a connection refused exception, as it tries to connect to localhost:27676. At least it's the right port.
WARNING: [C4003]: Error occurred on connection creation [localhost:27676]. - cause: java.net.ConnectException: Connection refused: connect
I've also updated the following values in the node config file (domain.xml) to remove references to localhost: the jms-host and node-host values.
I had this issue before with a standalone instance, and it was resolved by adding entries to the /etc/hosts file. However, that does not seem to resolve the issue here.
I also have all server instance IPs in the hosts file.
Am I missing something very basic here?
Any help would be greatly appreciated.
Thanks

If you look at the log files under the
${glassfish_home}/glassfish/nodes/cluster-name/instance-name/imq/instances/instance-name/log
folder, you will see that the master brokers do not match.
Each of your nodes has a different master broker; most likely every node considers its own broker to be the master.
I had the same error, and after a few days I found this.

Related

Connecting Celery to Azure GOV Redis

I'm trying to get Celery to connect to an Azure GOV Redis resource.
I found and tried this answer, How to setup Celery to talk ssl to Azure Redis Instance, for the connection string.
When I use port 6380 I get a timeout: "[ERROR/MainProcess] consumer: Cannot connect to rediss://:#*.redis.cache.usgovcloudapi.net:6380/0: Error while reading from socket: ('timed out',)". So I added BROKER_CONNECTION_TIMEOUT and TRANSPORT OPTIONS to try to get a different result, but no change.
My last thought was to try 6379, which I believe is the port that Azure suggests... with this I get "Cannot connect to rediss://:#*.redis.cache.usgovcloudapi.net:6379/0: Error 111 connecting to ***.redis.cache.usgovcloudapi.net:6379. ECONNREFUSED".
I know GOV is more restrictive, and therefore different from normal Azure, so I was wondering whether anyone has been able to connect this way successfully. Thanks!

Using Redis in an express service running in minikube

I've got an Express service running in a minikube cluster and I'm trying to set up a Redis client, but when I try to run the service with the Redis client created, it basically stalls on deployment and times out. As soon as I add the line:
const client = redis.createClient('http://127.0.0.1:6379');
My service will not deploy and run (even running the default with no supplied address causes the same issue).
I'm quite new to Kubernetes in general, so I'm not sure if this is an issue with minikube, like trying to create a client from inside the cluster with that address not being possible, or something along those lines.
I'm completely lost with why just trying to create a client is causing this issue so any advice or direction would be greatly appreciated.
Try using "service-name.namespace-name.svc.cluster.local" instead of IP address to connect to service.
For example: If my service name is car-redis-service and namespace is default then the command goes like
redis.createClient(REDISPORT, redis://car-redis-service.default.svc.cluster.local)
Or
redis.createClient(REDISPORT, 'car-redis-service.default.svc.cluster.local')
(source)
Here REDISPORT is the port on which Redis is configured.
For more information on Redis in Kubernetes, refer to this article.

Stack Exchange Connecting to Redis Cluster Connection error

I am trying to connect our ASP.NET application to a Redis Cluster through the StackExchange.Redis client, but I am getting the connection error shown below:
No connection is available to service this operation:
I am using the connection string:
<add key="SearchCacheRedisConnectionString" value="IP:6379,IP:6379,connectTimeout=1000,abortConnect=false,ConnectRetry=3,syncTimeout=500,keepAlive=180" />
I have used the same connection string to connect to a standalone redis instance and everything works perfectly.
It's only when I try to connect to a cluster (3 masters, 3 slaves) that I get a connection error.
Is there a different connection string I am supposed to use to connect to a Redis Cluster, or are there specific changes I am supposed to make in my code to connect to a cluster?
Any help will be much appreciated. Thank you
Could your connectTimeout be too low? The StackExchange.Redis default is 5000 ms.
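For example, the same connection string from the question with only connectTimeout raised to that default (a sketch, not a tuned value for your environment):
<add key="SearchCacheRedisConnectionString" value="IP:6379,IP:6379,connectTimeout=5000,abortConnect=false,ConnectRetry=3,syncTimeout=500,keepAlive=180" />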

Activemq stops working - activemq/zookeeper setup

I've configured 3 ZooKeeper and 3 ActiveMQ instances in one cluster.
Scenario
3 ActiveMQ instances, with 1 master and the other two as slaves.
All 3 ActiveMQ instances are running, i.e. sudo service activemq status returns running, but checking the logs, one instance (activemq1) is currently waiting for other cluster members, one instance (activemq2) has stopped, and one instance (activemq3) has an error. Assuming that only two instances are required to elect a master, this setup should be able to run successfully.
Two ActiveMQ instances should be running.
The ZooKeeper instances are running fine.
Issue
Below are the stack traces of the respective ActiveMQ instances. Based on my understanding, at least two properly running ActiveMQ instances are needed for the cluster to nominate a master. Given that all ActiveMQ instances report running when issued sudo service activemq status, I'm assuming there is an issue inside each ActiveMQ instance; refer to the stack traces below. Now, I noticed in the logs that activemq1 fails to run properly only because the other ActiveMQ instances failed internally. Notice the stack trace of activemq2: it is stuck after it successfully connects to ZooKeeper. activemq3 has an issue I still need to figure out. The problem is fixed when I restart activemq2 and activemq3. However, I can't be sure this won't happen again, hence this question.
activemq1 shows the stack trace below, which I assume is because the other 2 ActiveMQ instances are running but have errors:
Session establishment complete on server 10.5.4.111/10.5.4.111:2181, sessionid = 0x1582db00708000c, negotiated timeout = 4000
Not enough cluster members connected to elect a master.
Not enough cluster members connected to elect a master.
Not enough cluster members connected to elect a master.
activemq2 has the stack trace below, which is the one I don't understand. It has stopped after a successful connection to ZooKeeper, which should be detected by the other ActiveMQ instances in the cluster (activemq1 and activemq3):
Opening socket connection to server 10.5.4.111/10.5.4.111:2181
Socket connection established to 10.5.4.111/10.5.4.111:2181, initiating session
Session establishment complete on server 10.5.4.111/10.5.4.111:2181, sessionid = 0x1582db00708000d, negotiated timeout = 4000
activemq3 has the stack trace below:
org.apache.jasper.servlet.JspServletWrapper.handleJspException(JspServletWrapper.java:568)[apache-jsp-8.0.9.M3.jar:2.3]
Configuration for activemq
The previous config here used a 2s zkSessionTimeout, which is the default. I changed it to 4s, based on what I found online, to give an ActiveMQ instance more time to register itself with ZooKeeper.
<persistenceAdapter>
<replicatedLevelDB
directory="${activemq.data}/leveldb"
replicas="3"
bind="tcp://0.0.0.0:61619"
zkAddress="zookeeper_addresses_here"
hostname="activemq_hostname_here"
zkSessionTimeout="4s"
/>
</persistenceAdapter>
Configuration for zookeeper
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/my/data/dir
clientPort=2181
server.1=activemq1_privateIP:2888:3888
server.2=activemq2_privateIP:2888:3888
server.3=activemq3_privateIP:2888:3888
autopurge.purgeInterval=24
autopurge.snapRetainCount=5
Zookeeper version 3.4.9
ActiveMQ version 5.13.4
Set up via OpsWorks
The attribute "directory" master-slave mq is need to refer to the same folder

MQ With WLS Foreign Server

I am facing two issues when I try to connect to MQ, which is deployed on a remote server, from WebLogic Server (WLS) by creating a Foreign Server.
1. When I try to connect to the MQ queue manager in bindings mode (after importing the .bindings file) I keep getting the below error in the WLS console:
java.lang.UnsatisfiedLinkError: no mqjbnd05 in java.library.path
2. If I switch the transport to Client I keep getting:
JMSWMQ0018: Failed to connect to queue manager '' with connection mode 'Client' and host name 'localhost'. Check the queue manager is started and if running in client mode, check there is a listener running. Please see the linked exception for more information.
Has anyone seen this, and are there any performance implications which dictate the use of client over bindings and vice versa?
TIA
Finally I was able to resolve this. I had to recreate the .bindings file in client mode, with changes to IVTsetup.bat, which is most likely present in
C:\Program Files\IBM\WebSphere MQ\java\bin. I had to run
def qcf(psQCF) TRANSPORT(CLIENT) HOST(SMEKA) PORT(1415) CHANNEL(ps_SRV_CHANNEL) QMGR(psQM)
to generate the .bindings file.
Refer to this link for more details:
http://publib.boulder.ibm.com/infocenter/wbihelp/v6rxmx/index.jsp?topic=/com.ibm.wbia_adapters.doc/doc/peoplesoft/peopleso103.htm
Where the question states "I try to connect to MQ which is deployed on a Remote Server from Weblogic Server", I assume this means that WLS and WMQ are on two different hosts. If that is the case, then a bindings mode connection (which relies on shared memory segments) won't work.
The client mode connection appears to be using a CF that is pointed to localhost rather than the IP or hostname of the WMQ server. This would work for an application on the same host as the queue manager but not when the app and QMgr are on separate servers.
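As an illustration only (not the poster's actual setup), a client-mode connection factory defined in code would point at the remote queue manager host rather than localhost. The class and setters below come from the WMQ JMS client (com.ibm.mq.jms); the host name is a placeholder, while the port, channel and queue manager mirror the def qcf example earlier in the thread:
import javax.jms.Connection;
import com.ibm.mq.jms.MQQueueConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class ClientModeConnection {
    public static void main(String[] args) throws Exception {
        MQQueueConnectionFactory cf = new MQQueueConnectionFactory();
        // CLIENT transport goes over TCP; older MQ releases use JMSC.MQJMS_TP_CLIENT_MQ_TCPIP instead.
        cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);
        cf.setHostName("mq.example.com"); // the remote WMQ host, not localhost
        cf.setPort(1415);
        cf.setChannel("ps_SRV_CHANNEL");
        cf.setQueueManager("psQM");

        Connection connection = cf.createConnection();
        connection.close();
    }
}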
As far as choosing between client and bindings mode, the answer is that if the QMgr is local use bindings. This provides highest reliability, best performance and XA transactionality. When using client mode, two-phase XA commit is not supported without the Extended Transactional Client. Per the JMS specification, there is an ambiguity that can exist if an app loses the connection during a COMMIT call. Depending on how the app handles this it's possible to end up with duplicate messages. (The JMS spec refers to these as "functionally duplicate.") This ambiguity is much less likely to occur with a bindings mode connection since there is no network latency and not even any traversal of the IP stack or interface. So use bindings mode where possible.
UPDATE:
Removed note about Extended Transactional Client being a chargeable component. As of April 24th, XTC is free of charge for all versions of WMQ on all platforms.