Apache Ignite TcpDiscoveryKubernetesIpFinder fails in an Azure Kubernetes cluster in a vNet - ignite

vNet IP Address space: 10.106.8.0/22
Apache Ignite version: 2.9.1
Kubernetes version: 1.19.7
Service CIDR: 10.0.0.0/16
DNS Service IP: 10.0.0.10
Docker bridge CIDR: 172.17.0.1/16
We deployed an AKS cluster in a vNet and then deployed an Apache Ignite 2.9.1 cluster on it. sqlline.sh and the thin client (dotnet) were able to connect using port 10800 and the internal load balancer IP address. But the Ignite node started in client mode (clientMode=true, the dotnet thick client) wasn't able to connect (XML config file attached); the error is below.
Any help to resolve the issue is much appreciated.
[Error] [org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi] Failed to get registered addresses from IP finder on start
After the above, the error below repeats indefinitely...
[06:04:20] [Error] [org.apache.ignite.internal.util.typedef.G] Blocked system-critical thread has been detected. This can lead to cluster-wide undefined behaviour [workerName=tcp-client-disco-msg-worker, threadName=tcp-client-disco-msg-worker-#4-#35, blockedFor=13s]
[06:04:20] [Warn] [] Possible failure suppressed accordingly to a configured handler [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0, super=AbstractFailureHandler [ignoredFailureTypes=UnmodifiableSet [SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], failureCtx=FailureContext [type=SYSTEM_WORKER_BLOCKED, err=class o.a.i.IgniteException: GridWorker [name=tcp-client-disco-msg-worker, igniteInstanceName=null, finished=false, heartbeatTs=1614578647003]]]
[06:04:20] [Warn] [org.apache.ignite.internal.processors.cache.CacheDiagnosticManager] Page locks dump:
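The "Failed to get registered addresses from IP finder" error usually indicates that the pod could not query the Kubernetes API for the endpoints of the configured service, which is how TcpDiscoveryKubernetesIpFinder resolves node addresses. A quick way to check whether a pod's service account is allowed to do that (the service account name here is taken from the ClusterRoleBinding output shown further below, so adjust it to your deployment):
kubectl auth can-i get endpoints -n cohort-store --as=system:serviceaccount:cohort-frontdoor:cohort-frontdoor.com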
Apache Ignite server nodes were deployed in the cohort-store k8s namespace...
kubectl get pods -n cohort-store
NAME READY STATUS RESTARTS AGE
cohortstore-0 1/1 Running 0 3d6h
cohortstore-1 1/1 Running 0 3d6h
cohortstore-2 1/1 Running 0 3d6h
kubectl -n cohort-store get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
cohortstore-load-balancer-internal LoadBalancer 10.0.113.146 10.106.8.255 8080:31417/TCP,10800:32719/TCP,10900:31208/TCP 29h
The Apache Ignite client node (dotnet thick client) was deployed in the cohort-frontdoor k8s namespace...
kubectl get pods -n cohort-frontdoor
NAME READY STATUS RESTARTS AGE
cohortfrontdoor-665f99bb6b-tdl5z 1/1 Running 0 72m
Client XML Spring config file
<?xml version="1.0" encoding="UTF-8"?>
<!--
Configuration example with Kubernetes IP finder and Ignite persistence enabled.
WAL files and database files are stored in separate disk drives.
-->
<beans
xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd">
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
<property name="clientmode" value="true"/>
<property name="failureDetectionTimeout" value="5000"/>
<property name="clientFailureDetectionTimeout" value="10000"/>
<property name="cacheConfiguration">
<list>
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="ephi"/>
<property name="cacheMode" value="PARTITIONED"/>
<property name="writeSynchronizationMode" value="FULL_SYNC"/>
<property name="backups" value="0"/>
</bean>
</list>
</property>
<!-- Explicitly configure TCP discovery SPI to provide list of initial nodes. -->
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="networkTimeout" value="10000" />
<property name="localPort" value="47500" />
<property name="ipFinder">
<!--
Enables the Kubernetes IP finder and sets a custom namespace and service name.
-->
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder">
<property name="namespace" value="cohort-store"/>
<property name="serviceName" value="cohortstore-load-balancer-internal"/>
</bean>
</property>
<property name="socketTimeout" value="300" />
</bean>
</property>
<property name="communicationSpi">
<bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
<property name="slowClientQueueLimit" value="1000"/>
</bean>
</property>
</bean>
</beans>

The error went away after I added a ClusterRoleBinding (see below for more info). I had to do this because my Ignite server nodes and the thick client node are in two different K8s namespaces and use two different service accounts.
But now, below is what I see repeating in a big loop on the server nodes...
INFO: TCP discovery accepted incoming connection [rmtAddr=/10.106.8.32, rmtPort=43883]
Mar 02, 2021 12:02:44 AM org.apache.ignite.logger.java.JavaLogger info
INFO: TCP discovery spawning a new thread for connection [rmtAddr=/10.106.8.32, rmtPort=43883]
Mar 02, 2021 12:02:44 AM org.apache.ignite.logger.java.JavaLogger info
INFO: Started serving remote node connection [rmtAddr=/10.106.8.32:43883, rmtPort=43883]
Mar 02, 2021 12:02:44 AM org.apache.ignite.logger.java.JavaLogger info
INFO: Initialized connection with remote client node [nodeId=59a5ce6f-2d0d-4abb-aaf5-b2b9f51f7e44, rmtAddr=/10.106.8.32:43883]
Mar 02, 2021 12:02:44 AM org.apache.ignite.logger.java.JavaLogger info
INFO: Finished serving remote node connection [rmtAddr=/10.106.8.32:43883, rmtPort=43883]
Mar 02, 2021 12:02:46 AM org.apache.ignite.logger.java.JavaLogger info
clusterrolebinding info
k describe clusterrolebinding cohortstore-RoleBinding
Name: cohortstore-RoleBinding
Labels: app.kubernetes.io/managed-by=Helm
Annotations: meta.helm.sh/release-name: cohortstore
meta.helm.sh/release-namespace: cohort-store
Role:
Kind: ClusterRole
Name: cohortstore-Role
Subjects:
Kind Name Namespace
---- ---- ---------
ServiceAccount cohortstore.com cohort-store
ServiceAccount cohort-frontdoor.com cohort-frontdoor
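For reference, roughly equivalent role and binding objects can be created imperatively along these lines (a sketch only; the names mirror the Helm-managed objects above, and the exact verbs and resources granted by your chart may differ):
kubectl create clusterrole cohortstore-Role --verb=get,list,watch --resource=endpoints,pods
kubectl create clusterrolebinding cohortstore-RoleBinding --clusterrole=cohortstore-Role --serviceaccount=cohort-store:cohortstore.com --serviceaccount=cohort-frontdoor:cohort-frontdoor.com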

Related

Start ignite in persistent store mode and query using sqlline console

I am starting an Ignite DB in persistent mode with the following configuration:
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd">
<!--
Alter configuration below as needed.
-->
<bean id="grid.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
<!-- Enabling Apache Ignite Persistent Store. -->
<property name="dataStorageConfiguration">
<bean class="org.apache.ignite.configuration.DataStorageConfiguration">
<property name="defaultDataRegionConfiguration">
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
<property name="persistenceEnabled" value="true"/>
</bean>
</property>
</bean>
</property>
</bean>
</beans>
This configuration is in the config/default-config.xml file.
After that I start the sqlline console using:
sqlline.sh --color=true --verbose=true -u jdbc:ignite:thin://127.0.0.1/
In the console, when I execute any DDL query, it gives the following error:
java.sql.SQLException: Can not perform the operation because the cluster is inactive. Note, that the cluster is considered inactive by default if Ignite Persistent Store is used to let all the nodes join the cluster. To activate the cluster call Ignite.active(true).
at org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:749)
at org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute0(JdbcThinStatement.java:211)
at org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute(JdbcThinStatement.java:474)
at sqlline.Commands.execute(Commands.java:823)
at sqlline.Commands.sql(Commands.java:733)
at sqlline.SqlLine.dispatch(SqlLine.java:795)
at sqlline.SqlLine.begin(SqlLine.java:668)
at sqlline.SqlLine.start(SqlLine.java:373)
at sqlline.SqlLine.main(SqlLine.java:265)
How am I supposed to activate the cluster? Where do I call Ignite.active(true)?
Note: I am using GridGain Community Edition (Ignite) version 8.7.5 on Ubuntu 18.04.
The simplest way is to use the control.sh script, which you’ll find in the bin directory:
control.sh --activate
The other ways are documented.
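For example, assuming the ignite-rest-http module is on the classpath and the REST port (8080 by default) is reachable, the cluster can also be activated over the REST API; programmatically, the equivalent is ignite.cluster().active(true). A sketch of the REST call:
curl "http://127.0.0.1:8080/ignite?cmd=activate"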

Apache Ignite - Node running on remote machine not discovered

Apache Ignite Version is: 2.1.0
I am using TcpDiscoveryVmIpFinder to configure the nodes in an Apache Ignite cluster to setup a compute grid. Below is my configuration which is nothing but the example-default.xml, edited for the IP addresses:
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<!--
Ignite provides several options for automatic discovery that can be used
instead of static IP based discovery. For information on all options refer
to our documentation: http://apacheignite.readme.io/docs/cluster-config
-->
<!-- Uncomment static IP finder to enable static-based discovery of initial nodes. -->
<!--<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">-->
<!-- <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder"> -->
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
<property name="addresses">
<list>
<!-- In distributed environment, replace with actual host IP address. -->
<value>xxx.40.16.yyy:47500..47509</value>
<value>xx.40.16.zzz:47500..47509</value>
</list>
</property>
</bean>
</property>
</bean>
</property>
If I start multiple nodes on a single machine, the nodes on that machine discover each other and form a cluster. But the nodes on the remote machines do not discover each other.
Any advice would be helpful...
First of all, make sure that you really use this config file and not the default config. With the default configuration, nodes can find each other only on the same machine.
Once you've checked that, you also need to test that it's possible to connect from host 106.40.16.64 to 106.40.16.121 (and vice versa) on ports 47500..47509. It's possible that a firewall is blocking connections or that these ports are simply closed.
For example, you can check this with netcat; run the following from the 106.40.16.64 host:
nc -z 106.40.16.121 47500
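To check the whole discovery port range at once, a small shell loop also works (adjust the host and timeout as needed):
for p in $(seq 47500 47509); do nc -z -w 2 106.40.16.121 $p && echo "port $p open"; done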

Apache Ignite connection between two nodes

I've recently started to learn about Apache Ignite and I have a newbie question. I'm trying to create 2 Ignite nodes (1 server node and 1 client). I successfully started the server node, but when I try to start the client node I get an error:
[04:23:19,478][SEVERE][grid-nio-worker-0-#29%testGrid-client2%][TcpCommunicationSpi] Closing NIO session because of unhandled exception.
class org.apache.ignite.internal.util.nio.GridNioException: null
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:1595)
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:1516)
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1289)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
at org.apache.ignite.internal.util.nio.GridNioRecoveryDescriptor.ackReceived(GridNioRecoveryDescriptor.java:195)
at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$2.onMessage(TcpCommunicationSpi.java:559)
at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$2.onMessage(TcpCommunicationSpi.java:330)
at org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onMessageReceived(GridNioFilterChain.java:270)
at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:107)
at org.apache.ignite.internal.util.nio.GridNioCodecFilter.onMessageReceived(GridNioCodecFilter.java:107)
at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:107)
at org.apache.ignite.internal.util.nio.GridConnectionBytesVerifyFilter.onMessageReceived(GridConnectionBytesVerifyFilter.java:123)
at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:107)
at org.apache.ignite.internal.util.nio.GridNioServer$HeadFilter.onMessageReceived(GridNioServer.java:2332)
at org.apache.ignite.internal.util.nio.GridNioFilterChain.onMessageReceived(GridNioFilterChain.java:173)
at org.apache.ignite.internal.util.nio.GridNioServer$DirectNioClientWorker.processRead(GridNioServer.java:918)
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:1583)
... 4 more
[04:23:19,495][SEVERE][grid-nio-worker-1-#30%testGrid-client2%][TcpCommunicationSpi] Closing NIO session because of unhandled exception.
class org.apache.ignite.internal.util.nio.GridNioException: null
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:1595)
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:1516)
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1289)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
at org.apache.ignite.internal.util.nio.GridNioRecoveryDescriptor.ackReceived(GridNioRecoveryDescriptor.java:195)
at org.apache.ignite.internal.util.nio.GridNioRecoveryDescriptor.onHandshake(GridNioRecoveryDescriptor.java:278)
at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$2.connected(TcpCommunicationSpi.java:617)
at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$2.onFirstMessage(TcpCommunicationSpi.java:492)
at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$2.onMessage(TcpCommunicationSpi.java:540)
at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$2.onMessage(TcpCommunicationSpi.java:330)
at org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onMessageReceived(GridNioFilterChain.java:270)
at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:107)
at org.apache.ignite.internal.util.nio.GridNioCodecFilter.onMessageReceived(GridNioCodecFilter.java:107)
at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:107)
at org.apache.ignite.internal.util.nio.GridConnectionBytesVerifyFilter.onMessageReceived(GridConnectionBytesVerifyFilter.java:113)
at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:107)
at org.apache.ignite.internal.util.nio.GridNioServer$HeadFilter.onMessageReceived(GridNioServer.java:2332)
at org.apache.ignite.internal.util.nio.GridNioFilterChain.onMessageReceived(GridNioFilterChain.java:173)
at org.apache.ignite.internal.util.nio.GridNioServer$DirectNioClientWorker.processRead(GridNioServer.java:918)
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:1583)
I run the server node on AIX with the default configuration from the Ignite 1.7.0 distribution, and I run the client node on Win7 with the following configuration:
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd">
<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
<property name="gridName" value="testGrid-client2"/>
<property name="clientMode" value="true"/>
<!--<property name="peerClassLoadingEnabled" value="true"/>-->
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder">
<property name="addresses">
<list>
<value>10.xxx.xxx.xxx:47500..47509</value>
</list>
</property>
</bean>
</property>
</bean>
</property>
</bean>
</beans>
where 10.xxx.xxx.xxx is the IP address of my AIX machine.
Big-endian architectures have been supported for quite some time. These days, Ignite automatically detects the underlying platform's endianness and switches to the appropriate mode of operation. As far as I remember, Apache Ignite community members have confirmed that Ignite works on Solaris and PowerPC.

Ignite on server

I have a server with a container running on it, in which Ignite node(s) are started.
I know the server's configuration (IP, container port, etc.).
I want to connect to (discover) this node from my PC (from IntelliJ IDEA).
In other words, I want to start another Ignite node that must connect to the node on the server.
How should I configure my new node?
With TcpDiscoverySpi or CommunicationSpi, and how do I specify the IP and port?
You need to start a node on your PC with a configuration where the IP finder set for TcpDiscoverySpi contains the list of IPs and ports of your remote cluster.
Most likely it will be more than enough to configure a static IP finder on your side.
You can simply create the static IP finder as shown below and set this discovery bean in the configuration of all the nodes (servers and clients):
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
<property name="addresses">
<list>
<!-- In distributed environment, replace with actual host IP address. -->
<value>server_1_ip:47500..47509</value>
<value>server_2_ip:47500..47509</value>
<value>server_3_ip:47500..47509</value>
</list>
</property>
</bean>
</property>
</bean>
</property>
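One additional note: since the Ignite node runs inside a container on the server, the discovery port (47500 by default) and the communication port (47100 by default, used by TcpCommunicationSpi) must also be reachable from outside the container. As a rough sketch, assuming a plain Docker setup with the official image:
docker run -d -p 47500:47500 -p 47100:47100 apacheignite/ignite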

SpringJMS 3.0.4 not connecting MQ 7.5 Connection using UserName

We are not able to connect to an MQ 7.5 queue manager from Spring JMS version 3.0.4 with a username. The username is passed to provide the appropriate authorization on the queue. We are using Spring JMS, which uses the MQ client libraries available on the same machine; the MQ queue manager/server is on a remote machine.
Below is the configuration we are using, but we are getting the error message shown below it.
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:jms="http://www.springframework.org/schema/jms" xmlns:p="http://www.springframework.org/schema/p" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd http://www.springframework.org/schema/jms http://www.springframework.org/schema/jms/spring-jms-3.0.xsd">
<!-- :: Messaging Infrastructure Beans :: -->
<bean id="transport" class="org.springframework.beans.factory.config.FieldRetrievingFactoryBean" p:staticField="com.ibm.mq.jms.JMSC.MQJMS_TP_CLIENT_MQ_TCPIP" />
<bean id="mqConnectionFactory" class="com.ibm.mq.jms.MQQueueConnectionFactory" p:transportType-ref="transport"
p:queueManager="${tcs.messaging.queueManager.name}" p:hostName="${tcs.messaging.queueManager.host}" p:port="${tcs.messaging.queueManager.port}"
p:channel="${tcs.messaging.queueManager.channel}" />
<bean id="queueConnectionFactory" class="org.springframework.jms.connection.CachingConnectionFactory"
p:targetConnectionFactory-ref="mqConnectionFactory" p:sessionCacheSize="${tcs.messaging.connectionFactory.sessionCacheSize}"
p:exceptionListener-ref="providerMessageListener" />
<bean id="providerMessageListener" class="com.uhg.treasury.customerservice.management.transport.jms.ProviderExceptionListener" />
<!-- New Addition :: -->
<bean id="myConnectionFactory2" class="org.springframework.jms.connection.UserCredentialsConnectionFactoryAdapter">
<property name="targetConnectionFactory"> <ref bean ="mqConnectionFactory"/> </property>
<property name="username"> <value>"tbossmqd"</value> </property>
<property name="password"> <value>"password1"</value> </property>
</bean>
</beans>
The error message is below:
2014-06-27 11:25:42,503 [main] DEBUG - DefaultMessageListenerContainer.establishSharedConnection(752) | Could not establish shared JMS Connection - leaving it up to asynchronous invokers to establish a Connection as soon as possible
com.ibm.msg.client.jms.DetailedJMSSecurityException: JMSWMQ2013: The security authentication was not valid that was supplied for QueueManager 'WMQT013' with connection mode 'Client' and host name 'wmqlt0006.xxx.com(1960)'.
Please check if the supplied username and password are correct on the QueueManager to which you are connecting.
at com.ibm.msg.client.wmq.common.internal.Reason.reasonToException(Reason.java:521)
at com.ibm.msg.client.wmq.common.internal.Reason.createException(Reason.java:221)
at com.ibm.msg.client.wmq.internal.WMQConnection.(WMQConnection.java:426)
at com.ibm.msg.client.wmq.factories.WMQConnectionFactory.createV7ProviderConnection(WMQConnectionFactory.java:6902)
at com.ibm.msg.client.wmq.factories.WMQConnectionFactory.createProviderConnection(WMQConnectionFactory.java:6277)
at com.ibm.msg.client.jms.admin.JmsConnectionFactoryImpl.createConnection(JmsConnectionFactoryImpl.java:285)
at com.ibm.mq.jms.MQConnectionFactory.createCommonConnection(MQConnectionFactory.java:6233)
at com.ibm.mq.jms.MQQueueConnectionFactory.createQueueConnection(MQQueueConnectionFactory.java:120)
at com.ibm.mq.jms.MQQueueConnectionFactory.createConnection(MQQueueConnectionFactory.java:203)
at org.springframework.jms.connection.SingleConnectionFactory.doCreateConnection(SingleConnectionFactory.java:342)
at org.springframework.jms.connection.SingleConnectionFactory.initConnection(SingleConnectionFactory.java:288)
at org.springframework.jms.connection.SingleConnectionFactory.createConnection(SingleConnectionFactory.java:225)
at org.springframework.jms.support.JmsAccessor.createConnection(JmsAccessor.java:184)
at org.springframework.jms.listener.AbstractJmsListeningContainer.createSharedConnection(AbstractJmsListeningContainer.java:403)
at org.springframework.jms.listener.AbstractJmsListeningContainer.establishSharedConnection(AbstractJmsListeningContainer.java:371)
at org.springframework.jms.listener.DefaultMessageListenerContainer.establishSharedConnection(DefaultMessageListenerContainer.java:749)
at org.springframework.jms.listener.AbstractJmsListeningContainer.doStart(AbstractJmsListeningContainer.java:278)
at org.springframework.jms.listener.AbstractJmsListeningContainer.start(AbstractJmsListeningContainer.java:263)
at org.springframework.jms.listener.DefaultMessageListenerContainer.start(DefaultMessageListenerContainer.java:555)
at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:166)
at org.springframework.context.support.DefaultLifecycleProcessor.access$1(DefaultLifecycleProcessor.java:154)
at org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.start(DefaultLifecycleProcessor.java:335)
at org.springframework.context.support.DefaultLifecycleProcessor.startBeans(DefaultLifecycleProcessor.java:143)
at org.springframework.context.support.DefaultLifecycleProcessor.onRefresh(DefaultLifecycleProcessor.java:108)
at org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:908)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:428)
at org.springframework.context.support.ClassPathXmlApplicationContext.(ClassPathXmlApplicationContext.java:197)
at org.springframework.context.support.ClassPathXmlApplicationContext.(ClassPathXmlApplicationContext.java:172)
at org.springframework.context.support.ClassPathXmlApplicationContext.(ClassPathXmlApplicationContext.java:158)
at com.uhg.treasury.customerservice.management.Server.(Server.java:61)
at com.uhg.treasury.customerservice.management.Server.(Server.java:43)
Caused by: com.ibm.mq.MQException: JMSCMQ0001: WebSphere MQ call failed with compcode '2' ('MQCC_FAILED') reason '2035' ('MQRC_NOT_AUTHORIZED').
at com.ibm.msg.client.wmq.common.internal.Reason.createException(Reason.java:209)
... 29 more
2014-06-27 11:25:42,512 [main] DEBUG - AbstractJmsListeningContainer.resumePausedTasks(539) | Resumed paused task: org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker#627a94a9
2014-06-27 11:25:42,512 [main] DEBUG - AbstractJmsListeningContainer.resumePausedTasks(539) | Resumed paused task: org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker#5db615c1
I can confirm that the MQQueueConnectionFactory.createConnection method being called by Spring is the version that passes no username/password. This is why you are seeing MQRC_NOT_AUTHORIZED: the username is not being passed to the queue manager.
I am not a Spring expert, but I believe that the new myConnectionFactory2 bean you have added needs to reference your CachingConnectionFactory bean (queueConnectionFactory) rather than directly referencing the MQQueueConnectionFactory bean (mqConnectionFactory). So change this:
<bean id="myConnectionFactory2" class="org.springframework.jms.connection.UserCredentialsConnectionFactoryAdapter">
<property name="targetConnectionFactory"> <ref bean ="mqConnectionFactory"/> </property>
<property name="username"> <value>"tbossmqd"</value> </property>
<property name="password"> <value>"password1"</value> </property>
</bean>
to be this:
<bean id="myConnectionFactory2" class="org.springframework.jms.connection.UserCredentialsConnectionFactoryAdapter">
<property name="targetConnectionFactory"> <ref bean ="queueConnectionFactory"/> </property>
<property name="username"> <value>"tbossmqd"</value> </property>
<property name="password"> <value>"password1"</value> </property>
</bean>
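If the error persists after rewiring the beans, it may also be worth confirming on the MQ side that the user actually has connect authority on the queue manager; assuming you have access to the MQ host, a check could look like this:
dspmqaut -m WMQT013 -t qmgr -p tbossmqd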