Error creating bean with name 'enableRedisKeyspaceNotificationsInitializer' - redis

I am able to connect to Azure Cache for Redis with the following Spring Session configuration:
<bean id="redisPassword" class="org.springframework.data.redis.connection.RedisPassword">
<constructor-arg index="0" value="xxxxxxxxxxxxxxxx"/>
</bean>
<bean id="redisStandaloneConfiguration" class="org.springframework.data.redis.connection.RedisStandaloneConfiguration">
<property name="hostName" value="acmedev.redis.cache.windows.net"/>
<property name="port" value="6380"/>
<property name="password" ref="redisPassword"/>
</bean>
<context:annotation-config/>
<bean class="org.springframework.session.data.redis.config.annotation.web.http.RedisHttpSessionConfiguration"/>
<bean class="org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory">
<constructor-arg index="0" ref="redisStandaloneConfiguration"/>
</bean>
My app successfully connects:
[lettuce-nioEventLoop-4-1] DEBUG io.lettuce.core.RedisClient - Connecting to Redis at acmedev.redis.cache.windows.net:6380: Success
The app then hangs for a while, and eventually I get this error:
11:22:54.712 [lettuce-nioEventLoop-4-1] DEBUG io.lettuce.core.protocol.CommandHandler - [channel=0xcf902cd8, /10.1.200.58:53533 -> acmedev.redis.cache.windows.net/52.240.141.200:6380, chid=0x1] Storing exception in connectionError
2020-02-19 11:22:54,713 WARN (org.springframework.context.support.AbstractApplicationContext:558) || - Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'enableRedisKeyspaceNotificationsInitializer' defined in class path resource [org/springframework/session/data/redis/config/annotation/web/http/RedisHttpSessionConfiguration.class]: Invocation of init method failed; nested exception is org.springframework.data.redis.RedisConnectionFailureException: Unable to connect to Redis; nested exception is io.lettuce.core.RedisConnectionException: Unable to connect to acmedev.redis.cache.windows.net:6380
11:22:54.719 [RMI TCP Connection(3)-127.0.0.1] DEBUG io.lettuce.core.RedisClient - Initiate shutdown (100, 100, MILLISECONDS)
[lettuce-nioEventLoop-4-1] DEBUG io.lettuce.core.protocol.CommandHandler - [channel=0xcf902cd8, /10.1.200.58:53533 -> acmedev.redis.cache.windows.net/52.240.141.200:6380, chid=0x1] Unexpected exception during request: java.io.IOException: An existing connection was forcibly closed by the remote host
java.io.IOException: An existing connection was forcibly closed by the remote host
at sun.nio.ch.SocketDispatcher.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:192)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253)
at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133)
at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:745)
These same beans work just fine when I use Redis running on localhost.
What am I doing wrong here?

First of all, RedisHttpSessionConfiguration tries (by default) to enable keyspace notifications, but that only works against unsecured instances.
The docs for the ConfigureNotifyKeyspaceEventsAction class explain why it works only on localhost:
This strategy will not work if the Redis instance has been properly secured. Instead,
the Redis instance should be configured externally and a Bean of type
ConfigureRedisAction#NO_OP should be exposed.
They also explain how it should be configured to work with a secured Redis instance.
Simply use the method RedisHttpSessionConfiguration#setConfigureRedisAction
to set ConfigureRedisAction#NO_OP, and then enable the notifications on your Redis instance yourself, for example by running: config set notify-keyspace-events Egx

<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:aop="http://www.springframework.org/schema/aop"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:tx="http://www.springframework.org/schema/tx" xmlns:context="http://www.springframework.org/schema/context"
xmlns:util="http://www.springframework.org/schema/util"
xmlns:p="http://www.springframework.org/schema/p"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-3.0.xsd
http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-2.0.xsd http://www.springframework.org/schema/context https://www.springframework.org/schema/context/spring-context.xsd
http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util-3.1.xsd">
.
.
.
<util:constant id="configureRedisAction"
static-field="org.springframework.session.data.redis.config.ConfigureRedisAction.NO_OP"/>
<bean class="org.springframework.session.data.redis.config.annotation.web.http.RedisHttpSessionConfiguration" p:configureRedisAction-ref="configureRedisAction"/>

Related

Start Ignite in persistent store mode and query using the sqlline console

I am starting an Ignite database in persistent mode with the following configuration:
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd">
<!--
Alter configuration below as needed.
-->
<bean id="grid.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
<!-- Enabling Apache Ignite Persistent Store. -->
<property name="dataStorageConfiguration">
<bean class="org.apache.ignite.configuration.DataStorageConfiguration">
<property name="defaultDataRegionConfiguration">
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
<property name="persistenceEnabled" value="true"/>
</bean>
</property>
</bean>
</property>
</bean>
</beans>
This configuration is in the config/default-config.xml file.
After that I start the sqlline console using:
sqlline.sh --color=true --verbose=true -u
jdbc:ignite:thin://127.0.0.1/
When I execute any DDL query in the console, it gives the following error:
java.sql.SQLException: Can not perform the operation because the cluster is inactive. Note, that the cluster is considered inactive by default if Ignite Persistent Store is used to let all the nodes join the cluster. To activate the cluster call Ignite.active(true).
at org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:749)
at org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute0(JdbcThinStatement.java:211)
at org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute(JdbcThinStatement.java:474)
at sqlline.Commands.execute(Commands.java:823)
at sqlline.Commands.sql(Commands.java:733)
at sqlline.SqlLine.dispatch(SqlLine.java:795)
at sqlline.SqlLine.begin(SqlLine.java:668)
at sqlline.SqlLine.start(SqlLine.java:373)
at sqlline.SqlLine.main(SqlLine.java:265)
How am I supposed to activate the cluster? Where do I call Ignite.active(true)?
Note: I am using GridGain Ignite Community Edition version 8.7.5 on Ubuntu 18.04.
The simplest way is to use the control.sh script, which you’ll find in the bin directory:
control.sh --activate
The other ways are documented.
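If you want to do it from code instead (the Ignite.active(true) the error message refers to), a minimal sketch, assuming a client node that joins the topology using the same config/default-config.xml, would be:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class ActivateCluster {
    public static void main(String[] args) {
        // Join the existing topology as a client node.
        Ignition.setClientMode(true);
        try (Ignite ignite = Ignition.start("config/default-config.xml")) {
            // Activate the cluster once all persistent server nodes have joined;
            // this is the programmatic equivalent of control.sh --activate.
            ignite.cluster().active(true);
        }
    }
}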

class org.apache.ignite.IgniteException: Can not perform the operation because the cluster is inactive

After fiddling with the basic cluster, I tried another example of native persistence in Ignite by adding the configuration below to a fresh cluster.
<!-- Enabling Apache Ignite Persistent Store. -->
<property name="dataStorageConfiguration">
    <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
        <property name="defaultDataRegionConfiguration">
            <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                <property name="persistenceEnabled" value="true"/>
            </bean>
        </property>
    </bean>
</property>
However, I am unable to perform create/insert/update/delete operations. I am facing the following error:
class org.apache.ignite.IgniteException: Can not perform the operation because the cluster is inactive. Note, that the cluster is considered inactive by default if Ignite Persistent Store is used to let all the nodes join the cluster. To activate the cluster call Ignite.active(true).
at org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:2017)
at org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:1979)
at org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequestHandler.executeQuery(JdbcRequestHandler.java:310)
at org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequestHandler.handle(JdbcRequestHandler.java:169)
at org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:148)
at org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:41)
at org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onMessageReceived(GridNioFilterChain.java:279)
at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109)
at org.apache.ignite.internal.util.nio.GridNioAsyncNotifyFilter$3.body(GridNioAsyncNotifyFilter.java:97)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at org.apache.ignite.internal.util.worker.GridWorkerPool$1.run(GridWorkerPool.java:70)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Any help will be appreciated.
The cluster needs to be explicitly activated if persistence is used. See details here: https://apacheignite.readme.io/docs/cluster-activation

Apache Ignite Cluster Does Not Start with Persistent Storage

I have a three-node (server) Apache Ignite cluster with one client. I am using disk-based persistent storage. I created a cache worth 10M records. At some point the cluster crashed, so I wanted to restart it. This is what I am running into:
When I restart the server nodes, they throw the following exception; I have copied the exception message below.
The client blocks and does not do anything; I do not see any exception message, but it appears to hang after printing the following messages.
I have included the default-config.xml here.
Any help in resolving this issue will be greatly appreciated. Thank you.
Server side exception
SEVERE: Failed to initialize cache. Will try to rollback cache start routine. [cacheName=geo10]
class org.apache.ignite.IgniteCheckedException: Failed to verify store file (invalid page size) [expectedPageSize=4096, filePageSize=2048]
at org.apache.ignite.internal.processors.cache.persistence.file.FilePageStore.checkFile(FilePageStore.java:185)
at org.apache.ignite.internal.processors.cache.persistence.file.FilePageStore.init(FilePageStore.java:392)
at org.apache.ignite.internal.processors.cache.persistence.file.FilePageStore.read(FilePageStore.java:291)
at org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager.read(FilePageStoreManager.java:288)
at org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager.read(FilePageStoreManager.java:273)
at org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl.acquirePage(PageMemoryImpl.java:569)
at org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl.acquirePage(PageMemoryImpl.java:487)
at org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager.getOrAllocateCacheMetas(GridCacheOffheapManager.java:515)
at org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager.initDataStructures(GridCacheOffheapManager.java:86)
at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.start(IgniteCacheOffheapManagerImpl.java:139)
at org.apache.ignite.internal.processors.cache.CacheGroupContext.start(CacheGroupContext.java:868)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.startCacheGroup(GridCacheProcessor.java:1935)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheStart(GridCacheProcessor.java:1860)
at org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.onCacheChangeRequest(CacheAffinitySharedManager.java:748)
at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onClusterStateChangeRequest(GridDhtPartitionsExchangeFuture.java:773)
at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:574)
at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:1901)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:745)
Sep 10, 2017 2:42:46 PM org.apache.ignite.logger.java.JavaLogger error
SEVERE: Failed to perform final activation steps [nodeId=2077e165-e8a2-4989-934c-c24c5c0bea80, client=false, topVer=AffinityTopologyVersion [topVer=1, minorTopVer=1]]
java.lang.NullPointerException
at org.apache.ignite.internal.processors.service.GridServiceProcessor.onKernalStart0(GridServiceProcessor.java:240)
at org.apache.ignite.internal.processors.service.GridServiceProcessor.onActivate(GridServiceProcessor.java:370)
at org.apache.ignite.internal.processors.cluster.GridClusterStateProcessor$5.run(GridClusterStateProcessor.java:576)
at org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6664)
at org.apache.ignite.internal.processors.closure.GridClosureProcessor$1.body(GridClosureProcessor.java:817)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
class org.apache.ignite.IgniteException: null
at org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:957)
at org.apache.ignite.internal.IgniteKernal.active(IgniteKernal.java:3427)
at com.accure.ignite.IgniteStarter.main(IgniteStarter.java:24)
Caused by: class org.apache.ignite.IgniteCheckedException: null
at org.apache.ignite.internal.processors.cluster.GridClusterStateProcessor$GridChangeGlobalStateFuture.onAllReceived(GridClusterStateProcessor.java:816)
at org.apache.ignite.internal.processors.cluster.GridClusterStateProcessor$GridChangeGlobalStateFuture.onResponse(GridClusterStateProcessor.java:809)
at org.apache.ignite.internal.processors.cluster.GridClusterStateProcessor.processChangeGlobalStateResponse(GridClusterStateProcessor.java:673)
at org.apache.ignite.internal.processors.cluster.GridClusterStateProcessor.sendChangeGlobalStateResponse(GridClusterStateProcessor.java:639)
at org.apache.ignite.internal.processors.cluster.GridClusterStateProcessor.access$2200(GridClusterStateProcessor.java:72)
at org.apache.ignite.internal.processors.cluster.GridClusterStateProcessor$5.run(GridClusterStateProcessor.java:597)
at org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6664)
at org.apache.ignite.internal.processors.closure.GridClosureProcessor$1.body(GridClosureProcessor.java:817)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Suppressed: class org.apache.ignite.IgniteCheckedException: Failed to perform final activation steps
at org.apache.ignite.internal.processors.cluster.GridClusterStateProcessor$5.run(GridClusterStateProcessor.java:589)
... 6 more
Caused by: java.lang.NullPointerException
at org.apache.ignite.internal.processors.service.GridServiceProcessor.onKernalStart0(GridServiceProcessor.java:240)
at org.apache.ignite.internal.processors.service.GridServiceProcessor.onActivate(GridServiceProcessor.java:370)
at org.apache.ignite.internal.processors.cluster.GridClusterStateProcessor$5.run(GridClusterStateProcessor.java:576)
... 6 more
[14:43:18] Topology snapshot [ver=2, servers=1, clients=1, CPUs=8, heap=18.0GB]
Sep 10, 2017 2:43:18 PM org.apache.ignite.logger.java.JavaLogger error
SEVERE: Error when executing service: null
java.lang.NullPointerException
at org.apache.ignite.internal.processors.service.GridServiceProcessor.serviceEntries(GridServiceProcessor.java:1289)
at org.apache.ignite.internal.processors.service.GridServiceProcessor.access$2000(GridServiceProcessor.java:119)
at org.apache.ignite.internal.processors.service.GridServiceProcessor$TopologyListener$1.run0(GridServiceProcessor.java:1578)
at org.apache.ignite.internal.processors.service.GridServiceProcessor$DepRunnable.run(GridServiceProcessor.java:1806)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Client Side Exception
[14:43:15] Message queue limit is set to 0 which may lead to potential OOMEs when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to message queues growth on sender and receiver sides.
[14:43:16] Security status [authentication=off, tls/ssl=off]
[14:43:16] REST protocols do not start on client node. To start the protocols on client node set '-DIGNITE_REST_START_ON_CLIENT=true' system property.
default-config.xml
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd">
<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
<!-- Enabling Apache Ignite Persistent Store. -->
<property name="persistentStoreConfiguration">
<bean class="org.apache.ignite.configuration.PersistentStoreConfiguration"/>
</property>
<property name="binaryConfiguration">
<bean class="org.apache.ignite.configuration.BinaryConfiguration">
<property name="compactFooter" value="false"/>
</bean>
</property>
<property name="memoryConfiguration">
<bean class="org.apache.ignite.configuration.MemoryConfiguration">
<!-- Setting the page size to 4 KB -->
<property name="pageSize" value="#{4 * 1024}"/>
</bean>
</property>
<!-- Explicitly configure TCP discovery SPI to provide a list of initial nodes. -->
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder">
<property name="addresses">
<list>
<!-- In distributed environment, replace with actual host IP address. -->
<value>127.0.0.1:55500..55502</value>
</list>
</property>
</bean>
</property>
</bean>
</property>
</bean>
After I changed the default-config to use pageSize=2 KB, the server still does not start and shows the following exception. Here is the stack trace:
SEVERE: Failed to reinitialize local partitions (preloading will be stopped): GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=3, minorTopVer=0], nodeId=4a2cb984, evt=NODE_JOINED]
class org.apache.ignite.IgniteCheckedException: WAL history is too short [descs=[org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$FileDescriptor#1d9, org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$FileDescriptor#1da, org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$FileDescriptor#1db, org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$FileDescriptor#1dc, org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$FileDescriptor#1dd, org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$FileDescriptor#1de, org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$FileDescriptor#1df, org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$FileDescriptor#1e0, org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$FileDescriptor#1e1, org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$FileDescriptor#1e2, org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$FileDescriptor#1e3, org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$FileDescriptor#1e4], start=FileWALPointer [idx=0, fileOffset=0, len=0, forceFlush=false]]
It looks like you first started the node with the default pageSize and later changed it to 4 KB (<property name="pageSize" value="#{4 * 1024}"/>),
so now Ignite cannot read the storage files: it expects a page size of 4 KB while the actual store files were created with a page size of 2 KB.
Try setting it back to 2 KB.
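For reference, a minimal programmatic sketch of the same fix, assuming the Ignite 2.1/2.2-era MemoryConfiguration and PersistentStoreConfiguration API that the question's XML uses:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.MemoryConfiguration;
import org.apache.ignite.configuration.PersistentStoreConfiguration;

public class StartWithMatchingPageSize {
    public static void main(String[] args) {
        MemoryConfiguration memCfg = new MemoryConfiguration();
        // Must match the page size the existing store files were created with (2 KB here).
        memCfg.setPageSize(2 * 1024);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setMemoryConfiguration(memCfg);
        cfg.setPersistentStoreConfiguration(new PersistentStoreConfiguration());

        Ignite ignite = Ignition.start(cfg);
        ignite.active(true); // persistent clusters start inactive
    }
}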

Apache Ignite connection between two nodes

I've recently started to learn about Apache Ignite and I have a newbie question. I'm trying to create two Ignite nodes (one server node and one client). I successfully started the server node, but when I try to start the client node I get this error:
[04:23:19,478][SEVERE][grid-nio-worker-0-#29%testGrid-client2%][TcpCommunicationSpi] Closing NIO session because of unhandled exception.
class org.apache.ignite.internal.util.nio.GridNioException: null
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:1595)
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:1516)
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1289)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
at org.apache.ignite.internal.util.nio.GridNioRecoveryDescriptor.ackReceived(GridNioRecoveryDescriptor.java:195)
at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$2.onMessage(TcpCommunicationSpi.java:559)
at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$2.onMessage(TcpCommunicationSpi.java:330)
at org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onMessageReceived(GridNioFilterChain.java:270)
at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:107)
at org.apache.ignite.internal.util.nio.GridNioCodecFilter.onMessageReceived(GridNioCodecFilter.java:107)
at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:107)
at org.apache.ignite.internal.util.nio.GridConnectionBytesVerifyFilter.onMessageReceived(GridConnectionBytesVerifyFilter.java:123)
at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:107)
at org.apache.ignite.internal.util.nio.GridNioServer$HeadFilter.onMessageReceived(GridNioServer.java:2332)
at org.apache.ignite.internal.util.nio.GridNioFilterChain.onMessageReceived(GridNioFilterChain.java:173)
at org.apache.ignite.internal.util.nio.GridNioServer$DirectNioClientWorker.processRead(GridNioServer.java:918)
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:1583)
... 4 more
[04:23:19,495][SEVERE][grid-nio-worker-1-#30%testGrid-client2%][TcpCommunicationSpi] Closing NIO session because of unhandled exception.
class org.apache.ignite.internal.util.nio.GridNioException: null
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:1595)
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:1516)
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1289)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
at org.apache.ignite.internal.util.nio.GridNioRecoveryDescriptor.ackReceived(GridNioRecoveryDescriptor.java:195)
at org.apache.ignite.internal.util.nio.GridNioRecoveryDescriptor.onHandshake(GridNioRecoveryDescriptor.java:278)
at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$2.connected(TcpCommunicationSpi.java:617)
at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$2.onFirstMessage(TcpCommunicationSpi.java:492)
at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$2.onMessage(TcpCommunicationSpi.java:540)
at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$2.onMessage(TcpCommunicationSpi.java:330)
at org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onMessageReceived(GridNioFilterChain.java:270)
at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:107)
at org.apache.ignite.internal.util.nio.GridNioCodecFilter.onMessageReceived(GridNioCodecFilter.java:107)
at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:107)
at org.apache.ignite.internal.util.nio.GridConnectionBytesVerifyFilter.onMessageReceived(GridConnectionBytesVerifyFilter.java:113)
at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:107)
at org.apache.ignite.internal.util.nio.GridNioServer$HeadFilter.onMessageReceived(GridNioServer.java:2332)
at org.apache.ignite.internal.util.nio.GridNioFilterChain.onMessageReceived(GridNioFilterChain.java:173)
at org.apache.ignite.internal.util.nio.GridNioServer$DirectNioClientWorker.processRead(GridNioServer.java:918)
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:1583)
I run the server node on AIX with the default configuration from the Ignite 1.7.0 distribution, and I run the client node on Windows 7 with the following configuration:
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd">
<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
<property name="gridName" value="testGrid-client2"/>
<property name="clientMode" value="true"/>
<!--<property name="peerClassLoadingEnabled" value="true"/>-->
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder">
<property name="addresses">
<list>
<value>10.xxx.xxx.xxx:47500..47509</value>
</list>
</property>
</bean>
</property>
</bean>
where 10.xxx.xxx.xxx is the IP address of my AIX machine.
Big-endian architectures have been supported for quite some time now. Ignite automatically detects the underlying platform's endianness and switches to the appropriate mode of operation. As far as I remember, Apache Ignite community members have confirmed that Ignite works on Solaris and PowerPC.

SpringJMS 3.0.4 not connecting MQ 7.5 Connection using UserName

We are not able to connect to an MQ 7.5 queue manager from Spring JMS version 3.0.4 with a username. The username is passed to provide the appropriate authorization on the queue. We are using Spring JMS, which uses the MQ client libraries available on the same machine; the MQ manager/server is on a remote machine.
Below is the configuration we are using, but we are getting the error message shown further below.
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:jms="http://www.springframework.org/schema/jms" xmlns:p="http://www.springframework.org/schema/p" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd http://www.springframework.org/schema/jms http://www.springframework.org/schema/jms/spring-jms-3.0.xsd">
<!-- :: Messaging Infrastructure Beans :: -->
<bean id="transport" class="org.springframework.beans.factory.config.FieldRetrievingFactoryBean" p:staticField="com.ibm.mq.jms.JMSC.MQJMS_TP_CLIENT_MQ_TCPIP" />
<bean id="mqConnectionFactory" class="com.ibm.mq.jms.MQQueueConnectionFactory" p:transportType-ref="transport"
p:queueManager="${tcs.messaging.queueManager.name}" p:hostName="${tcs.messaging.queueManager.host}" p:port="${tcs.messaging.queueManager.port}"
p:channel="${tcs.messaging.queueManager.channel}" />
<bean id="queueConnectionFactory" class="org.springframework.jms.connection.CachingConnectionFactory"
p:targetConnectionFactory-ref="mqConnectionFactory" p:sessionCacheSize="${tcs.messaging.connectionFactory.sessionCacheSize}"
p:exceptionListener-ref="providerMessageListener" />
<bean id="providerMessageListener" class="com.uhg.treasury.customerservice.management.transport.jms.ProviderExceptionListener" />
<!-- New Addition :: -->
<bean id="myConnectionFactory2" class="org.springframework.jms.connection.UserCredentialsConnectionFactoryAdapter">
<property name="targetConnectionFactory"> <ref bean ="mqConnectionFactory"/> </property>
<property name="username"> <value>"tbossmqd"</value> </property>
<property name="password"> <value>"password1"</value> </property>
</bean>
</beans>
The error message is as follows:
2014-06-27 11:25:42,503 [main] DEBUG - DefaultMessageListenerContainer.establishSharedConnection(752) | Could not establish shared JMS Connection - leaving it up to asynchronous invokers to establish a Connection as soon as possible
com.ibm.msg.client.jms.DetailedJMSSecurityException: JMSWMQ2013: The security authentication was not valid that was supplied for QueueManager 'WMQT013' with connection mode 'Client' and host name 'wmqlt0006.xxx.com(1960)'.
Please check if the supplied username and password are correct on the QueueManager to which you are connecting.
at com.ibm.msg.client.wmq.common.internal.Reason.reasonToException(Reason.java:521)
at com.ibm.msg.client.wmq.common.internal.Reason.createException(Reason.java:221)
at com.ibm.msg.client.wmq.internal.WMQConnection.<init>(WMQConnection.java:426)
at com.ibm.msg.client.wmq.factories.WMQConnectionFactory.createV7ProviderConnection(WMQConnectionFactory.java:6902)
at com.ibm.msg.client.wmq.factories.WMQConnectionFactory.createProviderConnection(WMQConnectionFactory.java:6277)
at com.ibm.msg.client.jms.admin.JmsConnectionFactoryImpl.createConnection(JmsConnectionFactoryImpl.java:285)
at com.ibm.mq.jms.MQConnectionFactory.createCommonConnection(MQConnectionFactory.java:6233)
at com.ibm.mq.jms.MQQueueConnectionFactory.createQueueConnection(MQQueueConnectionFactory.java:120)
at com.ibm.mq.jms.MQQueueConnectionFactory.createConnection(MQQueueConnectionFactory.java:203)
at org.springframework.jms.connection.SingleConnectionFactory.doCreateConnection(SingleConnectionFactory.java:342)
at org.springframework.jms.connection.SingleConnectionFactory.initConnection(SingleConnectionFactory.java:288)
at org.springframework.jms.connection.SingleConnectionFactory.createConnection(SingleConnectionFactory.java:225)
at org.springframework.jms.support.JmsAccessor.createConnection(JmsAccessor.java:184)
at org.springframework.jms.listener.AbstractJmsListeningContainer.createSharedConnection(AbstractJmsListeningContainer.java:403)
at org.springframework.jms.listener.AbstractJmsListeningContainer.establishSharedConnection(AbstractJmsListeningContainer.java:371)
at org.springframework.jms.listener.DefaultMessageListenerContainer.establishSharedConnection(DefaultMessageListenerContainer.java:749)
at org.springframework.jms.listener.AbstractJmsListeningContainer.doStart(AbstractJmsListeningContainer.java:278)
at org.springframework.jms.listener.AbstractJmsListeningContainer.start(AbstractJmsListeningContainer.java:263)
at org.springframework.jms.listener.DefaultMessageListenerContainer.start(DefaultMessageListenerContainer.java:555)
at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:166)
at org.springframework.context.support.DefaultLifecycleProcessor.access$1(DefaultLifecycleProcessor.java:154)
at org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.start(DefaultLifecycleProcessor.java:335)
at org.springframework.context.support.DefaultLifecycleProcessor.startBeans(DefaultLifecycleProcessor.java:143)
at org.springframework.context.support.DefaultLifecycleProcessor.onRefresh(DefaultLifecycleProcessor.java:108)
at org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:908)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:428)
at org.springframework.context.support.ClassPathXmlApplicationContext.<init>(ClassPathXmlApplicationContext.java:197)
at org.springframework.context.support.ClassPathXmlApplicationContext.<init>(ClassPathXmlApplicationContext.java:172)
at org.springframework.context.support.ClassPathXmlApplicationContext.<init>(ClassPathXmlApplicationContext.java:158)
at com.uhg.treasury.customerservice.management.Server.(Server.java:61)
at com.uhg.treasury.customerservice.management.Server.(Server.java:43)
Caused by: com.ibm.mq.MQException: JMSCMQ0001: WebSphere MQ call failed with compcode '2' ('MQCC_FAILED') reason '2035' ('MQRC_NOT_AUTHORIZED').
at com.ibm.msg.client.wmq.common.internal.Reason.createException(Reason.java:209)
... 29 more
2014-06-27 11:25:42,512 [main] DEBUG - AbstractJmsListeningContainer.resumePausedTasks(539) | Resumed paused task: org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker#627a94a9
2014-06-27 11:25:42,512 [main] DEBUG - AbstractJmsListeningContainer.resumePausedTasks(539) | Resumed paused task: org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker#5db615c1
I can confirm that the MQQueueConnectionFactory.createConnection method being called by Spring is the version that passes no username/password. This is why you are seeing MQRC_NOT_AUTHORIZED: the username is not being passed to the queue manager.
I am not a Spring expert, but I believe that the new myConnectionFactory2 bean you have added needs to reference your CachingConnectionFactory bean (queueConnectionFactory) rather than directly referencing the MQQueueConnectionFactory bean (mqConnectionFactory). So change this:
<bean id="myConnectionFactory2" class="org.springframework.jms.connection.UserCredentialsConnectionFactoryAdapter">
<property name="targetConnectionFactory"> <ref bean ="mqConnectionFactory"/> </property>
<property name="username"> <value>"tbossmqd"</value> </property>
<property name="password"> <value>"password1"</value> </property>
</bean>
to be this:
<bean id="myConnectionFactory2" class="org.springframework.jms.connection.UserCredentialsConnectionFactoryAdapter">
<property name="targetConnectionFactory"> <ref bean ="queueConnectionFactory"/> </property>
<property name="username"> <value>"tbossmqd"</value> </property>
<property name="password"> <value>"password1"</value> </property>
</bean>
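For comparison, here is a rough Java sketch of that corrected chain (my own sketch: the connection values mirror the XML and log messages above, and the channel name is a hypothetical placeholder). The bean your listener container or JmsTemplate uses should then be the credentials adapter, otherwise the username still never reaches the queue manager.

import com.ibm.mq.jms.JMSC;
import com.ibm.mq.jms.MQQueueConnectionFactory;
import org.springframework.jms.connection.CachingConnectionFactory;
import org.springframework.jms.connection.UserCredentialsConnectionFactoryAdapter;

public class MqFactoryChain {
    public static UserCredentialsConnectionFactoryAdapter credentialsFactory() throws Exception {
        // Raw MQ connection factory (the mqConnectionFactory bean).
        MQQueueConnectionFactory mqCf = new MQQueueConnectionFactory();
        mqCf.setTransportType(JMSC.MQJMS_TP_CLIENT_MQ_TCPIP);
        mqCf.setQueueManager("WMQT013");           // values taken from the error message above
        mqCf.setHostName("wmqlt0006.xxx.com");
        mqCf.setPort(1960);
        mqCf.setChannel("SOME.CHANNEL");           // hypothetical channel name

        // Caching wrapper (the queueConnectionFactory bean).
        CachingConnectionFactory caching = new CachingConnectionFactory(mqCf);

        // Credentials adapter wrapping the caching factory, as suggested above.
        UserCredentialsConnectionFactoryAdapter adapter = new UserCredentialsConnectionFactoryAdapter();
        adapter.setTargetConnectionFactory(caching);
        adapter.setUsername("tbossmqd");
        adapter.setPassword("password1");
        return adapter;
    }
}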