I have a three-node (server) Apache Ignite cluster with one client, using disk-based persistent storage. I created a cache with 10M records. At some point the cluster crashed, so I tried to restart it. This is what I am running into:
When I restart the server nodes, they throw the exception copied below.
The client blocks and does nothing; it prints no exception, but it appears to hang after the messages shown below.
I have included the default-config.xml here as well.
Any help in resolving this issue will be greatly appreciated. Thank you.
Server-side exception
SEVERE: Failed to initialize cache. Will try to rollback cache start routine. [cacheName=geo10]
class org.apache.ignite.IgniteCheckedException: Failed to verify store file (invalid page size) [expectedPageSize=4096, filePageSize=2048]
at org.apache.ignite.internal.processors.cache.persistence.file.FilePageStore.checkFile(FilePageStore.java:185)
at org.apache.ignite.internal.processors.cache.persistence.file.FilePageStore.init(FilePageStore.java:392)
at org.apache.ignite.internal.processors.cache.persistence.file.FilePageStore.read(FilePageStore.java:291)
at org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager.read(FilePageStoreManager.java:288)
at org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager.read(FilePageStoreManager.java:273)
at org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl.acquirePage(PageMemoryImpl.java:569)
at org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl.acquirePage(PageMemoryImpl.java:487)
at org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager.getOrAllocateCacheMetas(GridCacheOffheapManager.java:515)
at org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager.initDataStructures(GridCacheOffheapManager.java:86)
at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.start(IgniteCacheOffheapManagerImpl.java:139)
at org.apache.ignite.internal.processors.cache.CacheGroupContext.start(CacheGroupContext.java:868)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.startCacheGroup(GridCacheProcessor.java:1935)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheStart(GridCacheProcessor.java:1860)
at org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.onCacheChangeRequest(CacheAffinitySharedManager.java:748)
at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onClusterStateChangeRequest(GridDhtPartitionsExchangeFuture.java:773)
at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:574)
at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:1901)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:745)
Sep 10, 2017 2:42:46 PM org.apache.ignite.logger.java.JavaLogger error
SEVERE: Failed to perform final activation steps [nodeId=2077e165-e8a2-4989-934c-c24c5c0bea80, client=false, topVer=AffinityTopologyVersion [topVer=1, minorTopVer=1]]
java.lang.NullPointerException
at org.apache.ignite.internal.processors.service.GridServiceProcessor.onKernalStart0(GridServiceProcessor.java:240)
at org.apache.ignite.internal.processors.service.GridServiceProcessor.onActivate(GridServiceProcessor.java:370)
at org.apache.ignite.internal.processors.cluster.GridClusterStateProcessor$5.run(GridClusterStateProcessor.java:576)
at org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6664)
at org.apache.ignite.internal.processors.closure.GridClosureProcessor$1.body(GridClosureProcessor.java:817)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
class org.apache.ignite.IgniteException: null
at org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:957)
at org.apache.ignite.internal.IgniteKernal.active(IgniteKernal.java:3427)
at com.accure.ignite.IgniteStarter.main(IgniteStarter.java:24)
Caused by: class org.apache.ignite.IgniteCheckedException: null
at org.apache.ignite.internal.processors.cluster.GridClusterStateProcessor$GridChangeGlobalStateFuture.onAllReceived(GridClusterStateProcessor.java:816)
at org.apache.ignite.internal.processors.cluster.GridClusterStateProcessor$GridChangeGlobalStateFuture.onResponse(GridClusterStateProcessor.java:809)
at org.apache.ignite.internal.processors.cluster.GridClusterStateProcessor.processChangeGlobalStateResponse(GridClusterStateProcessor.java:673)
at org.apache.ignite.internal.processors.cluster.GridClusterStateProcessor.sendChangeGlobalStateResponse(GridClusterStateProcessor.java:639)
at org.apache.ignite.internal.processors.cluster.GridClusterStateProcessor.access$2200(GridClusterStateProcessor.java:72)
at org.apache.ignite.internal.processors.cluster.GridClusterStateProcessor$5.run(GridClusterStateProcessor.java:597)
at org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6664)
at org.apache.ignite.internal.processors.closure.GridClosureProcessor$1.body(GridClosureProcessor.java:817)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Suppressed: class org.apache.ignite.IgniteCheckedException: Failed to perform final activation steps
at org.apache.ignite.internal.processors.cluster.GridClusterStateProcessor$5.run(GridClusterStateProcessor.java:589)
... 6 more
Caused by: java.lang.NullPointerException
at org.apache.ignite.internal.processors.service.GridServiceProcessor.onKernalStart0(GridServiceProcessor.java:240)
at org.apache.ignite.internal.processors.service.GridServiceProcessor.onActivate(GridServiceProcessor.java:370)
at org.apache.ignite.internal.processors.cluster.GridClusterStateProcessor$5.run(GridClusterStateProcessor.java:576)
... 6 more
[14:43:18] Topology snapshot [ver=2, servers=1, clients=1, CPUs=8, heap=18.0GB]
Sep 10, 2017 2:43:18 PM org.apache.ignite.logger.java.JavaLogger error
SEVERE: Error when executing service: null
java.lang.NullPointerException
at org.apache.ignite.internal.processors.service.GridServiceProcessor.serviceEntries(GridServiceProcessor.java:1289)
at org.apache.ignite.internal.processors.service.GridServiceProcessor.access$2000(GridServiceProcessor.java:119)
at org.apache.ignite.internal.processors.service.GridServiceProcessor$TopologyListener$1.run0(GridServiceProcessor.java:1578)
at org.apache.ignite.internal.processors.service.GridServiceProcessor$DepRunnable.run(GridServiceProcessor.java:1806)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Client-side exception
[14:43:15] Message queue limit is set to 0 which may lead to potential OOMEs when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to message queues growth on sender and receiver sides.
[14:43:16] Security status [authentication=off, tls/ssl=off]
[14:43:16] REST protocols do not start on client node. To start the protocols on client node set '-DIGNITE_REST_START_ON_CLIENT=true' system property.
default-config.xml
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd">
<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
<!-- Enabling Apache Ignite Persistent Store. -->
<property name="persistentStoreConfiguration">
<bean class="org.apache.ignite.configuration.PersistentStoreConfiguration"/>
</property>
<property name="binaryConfiguration">
<bean class="org.apache.ignite.configuration.BinaryConfiguration">
<property name="compactFooter" value="false"/>
</bean>
</property>
<property name="memoryConfiguration">
<bean class="org.apache.ignite.configuration.MemoryConfiguration">
<!-- Setting the page size to 4 KB -->
<property name="pageSize" value="#{4 * 1024}"/>
</bean>
</property>
<!-- Explicitly configure TCP discovery SPI to provide a list of initial nodes. -->
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder">
<property name="addresses">
<list>
<!-- In distributed environment, replace with actual host IP address. -->
<value>127.0.0.1:55500..55502</value>
</list>
</property>
</bean>
</property>
</bean>
</property>
</bean>
</beans>
After I changed default-config.xml to use pageSize=2 KB, the server still does not start; it throws the following exception. Here is the stack trace.
SEVERE: Failed to reinitialize local partitions (preloading will be stopped): GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=3, minorTopVer=0], nodeId=4a2cb984, evt=NODE_JOINED]
class org.apache.ignite.IgniteCheckedException: WAL history is too short [descs=[org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$FileDescriptor#1d9, org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$FileDescriptor#1da, org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$FileDescriptor#1db, org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$FileDescriptor#1dc, org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$FileDescriptor#1dd, org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$FileDescriptor#1de, org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$FileDescriptor#1df, org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$FileDescriptor#1e0, org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$FileDescriptor#1e1, org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$FileDescriptor#1e2, org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$FileDescriptor#1e3, org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$FileDescriptor#1e4], start=FileWALPointer [idx=0, fileOffset=0, len=0, forceFlush=false]]
It looks like you first started the node with the default pageSize (2 KB in that Ignite version) and later changed it to 4 KB, so now Ignite cannot read the storage files: it expects 4 KB pages while the actual store files were written with 2 KB pages.
Try setting it back to 2 KB.
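For reference, a minimal sketch of the same fix done programmatically, assuming the Ignite 2.1-era MemoryConfiguration/PersistentStoreConfiguration API that the question's XML already uses (the class name is illustrative):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.MemoryConfiguration;
import org.apache.ignite.configuration.PersistentStoreConfiguration;

public class RestartWithMatchingPageSize {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Keep persistence enabled so the existing store files are reused.
        cfg.setPersistentStoreConfiguration(new PersistentStoreConfiguration());

        MemoryConfiguration memCfg = new MemoryConfiguration();
        // Must match the page size the store files were created with
        // (2 KB here); a mismatch fails the FilePageStore check in the trace above.
        memCfg.setPageSize(2 * 1024);
        cfg.setMemoryConfiguration(memCfg);

        Ignite ignite = Ignition.start(cfg);
    }
}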
Related
I have enabled Ignite native persistence and disabled the WAL:
<property name="dataStorageConfiguration">
<bean class="org.apache.ignite.configuration.DataStorageConfiguration">
<property name="defaultDataRegionConfiguration">
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
<property name="persistenceEnabled" value="true"/>
</bean>
</property>
<!-- disabled wal log because query result doesn't need recovery -->
<property name="walMode" value="NONE"/>
</bean>
</property>
<property name="cacheConfiguration">
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<!-- Set the cache name. -->
<property name="name" value="query_cache"/>
<!-- Set the cache mode. -->
<property name="cacheMode" value="PARTITIONED"/>
</bean>
</property>
I start the server from one application and operate on the cache in another class:
public class IgniteServer {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start("examples/config/ignite-server-config.xml");
        ignite.cluster().state(ClusterState.ACTIVE);
    }
}

try (Ignite ignite = Ignition.start("examples/config/ignite-server-config.xml")) {
    IgniteCache<String, String> cache = ignite.getOrCreateCache("query_cache");
    cache.put("1", "value-1");
    System.out.println(cache.get("1"));
}
This works fine, but after stopping IgniteServer I cannot restart it; it fails with the following error:
[15:28:20] Initialized write-ahead log manager in NONE mode, persisted data may be lost in a case of unexpected node failure. Make sure to deactivate the cluster before shutdown.
[2021-03-18 15:28:20,876][ERROR][main][root] Critical system error detected. Will be handled accordingly to configured handler [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0, super=AbstractFailureHandler [ignoredFailureTypes=UnmodifiableSet [SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], failureCtx=FailureContext [type=CRITICAL_ERROR, err=class o.a.i.i.processors.cache.persistence.StorageException: Failed to read checkpoint record from WAL, persistence consistency cannot be guaranteed. Make sure configuration points to correct WAL folders and WAL folder is properly mounted [ptr=FileWALPointer [idx=0, fileOff=0, len=0], walPath=db/wal, walArchive=db/wal/archive]]]
class org.apache.ignite.internal.processors.cache.persistence.StorageException: Failed to read checkpoint record from WAL, persistence consistency cannot be guaranteed. Make sure configuration points to correct WAL folders and WAL folder is properly mounted [ptr=FileWALPointer [idx=0, fileOff=0, len=0], walPath=db/wal, walArchive=db/wal/archive]
at org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.performBinaryMemoryRestore(GridCacheDatabaseSharedManager.java:2269)
at org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.readMetastore(GridCacheDatabaseSharedManager.java:873)
at org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.notifyMetaStorageSubscribersOnReadyForRead(GridCacheDatabaseSharedManager.java:5022)
at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1251)
at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2052)
at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1698)
at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1114)
at org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:1032)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:918)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:817)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:687)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:656)
at org.apache.ignite.Ignition.start(Ignition.java:353)
at org.apache.ignite.examples.atest.IgniteServer.main(IgniteServer.java:9)
[2021-03-18 15:28:20,881][ERROR][main][root] JVM will be halted immediately due to the failure: [failureCtx=FailureContext [type=CRITICAL_ERROR, err=class o.a.i.i.processors.cache.persistence.StorageException: Failed to read checkpoint record from WAL, persistence consistency cannot be guaranteed. Make sure configuration points to correct WAL folders and WAL folder is properly mounted [ptr=FileWALPointer [idx=0, fileOff=0, len=0], walPath=db/wal, walArchive=db/wal/archive]]]
Sometimes the server shuts down automatically with the message:
FileWriteAheadLogManager: Reached logical end of the segment for file: /ignite/work/db/wal/node-xxxx/xxx.wal
I have disabled the WAL, so I don't know why it still reads a checkpoint and fails. I checked $IGNITE_HOME/work/db/wal/node-xxx/ and found 10 WAL files, each 67.1 MB in size; it seems something loops and fills them all. After I deleted the work folder I could start the server again.
Questions:
How can I fix this problem with native persistence on and the WAL off?
It seems I shut the server down the wrong way; how can I stop the server safely from code so that the checkpoint check does not fail on restart?
Thanks.
I advise against walMode=NONE. If you have to use it, make sure to remove the whole persistence directory before restarting the node, or try calling ignite.cluster().state(ClusterState.INACTIVE) before shutting down any nodes, for example:
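A sketch of the server with an explicit deactivation step, reusing the question's config path and the ClusterState API it already uses:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cluster.ClusterState;

public class IgniteServer {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start("examples/config/ignite-server-config.xml")) {
            ignite.cluster().state(ClusterState.ACTIVE);

            // ... cache operations ...

            // With walMode=NONE, deactivation forces a clean checkpoint, so the
            // next start does not depend on WAL-based recovery (the startup log
            // itself warns: "Make sure to deactivate the cluster before shutdown").
            ignite.cluster().state(ClusterState.INACTIVE);
        } // try-with-resources stops the node only after deactivation
    }
}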
I am able to connect to Azure Cache for Redis with the following Spring Session configuration:
<bean id="redisPassword" class="org.springframework.data.redis.connection.RedisPassword">
<constructor-arg index="0" value="xxxxxxxxxxxxxxxx"/>
</bean>
<bean id="redisStandaloneConfiguration" class="org.springframework.data.redis.connection.RedisStandaloneConfiguration">
<property name="hostName" value="acmedev.redis.cache.windows.net"/>
<property name="port" value="6380"/>
<property name="password" ref="redisPassword"/>
</bean>
<context:annotation-config/>
<bean class="org.springframework.session.data.redis.config.annotation.web.http.RedisHttpSessionConfiguration"/>
<bean class="org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory">
<constructor-arg index="0" ref="redisStandaloneConfiguration"/>
</bean>
My app successfully connects:
[lettuce-nioEventLoop-4-1] DEBUG io.lettuce.core.RedisClient - Connecting to Redis at acmedev.redis.cache.windows.net:6380: Success
The app then hangs for a while and eventually I get this error:
11:22:54.712 [lettuce-nioEventLoop-4-1] DEBUG io.lettuce.core.protocol.CommandHandler - [channel=0xcf902cd8, /10.1.200.58:53533 -> acmedev.redis.cache.windows.net/52.240.141.200:6380, chid=0x1] Storing exception in connectionError
2020-02-19 11:22:54,713 WARN (org.springframework.context.support.AbstractApplicationContext:558) || - Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'enableRedisKeyspaceNotificationsInitializer' defined in class path resource [org/springframework/session/data/redis/config/annotation/web/http/RedisHttpSessionConfiguration.class]: Invocation of init method failed; nested exception is org.springframework.data.redis.RedisConnectionFailureException: Unable to connect to Redis; nested exception is io.lettuce.core.RedisConnectionException: Unable to connect to acmedev.redis.cache.windows.net:6380
11:22:54.719 [RMI TCP Connection(3)-127.0.0.1] DEBUG io.lettuce.core.RedisClient - Initiate shutdown (100, 100, MILLISECONDS)
[lettuce-nioEventLoop-4-1] DEBUG io.lettuce.core.protocol.CommandHandler - [channel=0xcf902cd8, /10.1.200.58:53533 -> acmedev.redis.cache.windows.net/52.240.141.200:6380, chid=0x1] Unexpected exception during request: java.io.IOException: An existing connection was forcibly closed by the remote host
java.io.IOException: An existing connection was forcibly closed by the remote host
at sun.nio.ch.SocketDispatcher.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:192)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253)
at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133)
at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:745)
These same beans work just fine when I use redis running on localhost.
What am I doing wrong here?
First of all, RedisHttpSessionConfiguration tries (by default) to enable keyspace notifications, but this only works against unsecured instances.
The docs for the class ConfigureNotifyKeyspaceEventsAction explain why it only works on localhost:
This strategy will not work if the Redis instance has been properly secured. Instead,
the Redis instance should be configured externally and a Bean of type
ConfigureRedisAction#NO_OP should be exposed.
They also explain how it should be configured to work with a secured Redis instance: use the method RedisHttpSessionConfiguration#setConfigureRedisAction to set ConfigureRedisAction#NO_OP, and then, for example, on your Redis instance run: config set notify-keyspace-events Egx
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:aop="http://www.springframework.org/schema/aop"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:tx="http://www.springframework.org/schema/tx" xmlns:context="http://www.springframework.org/schema/context"
xmlns:util="http://www.springframework.org/schema/util"
xmlns:p="http://www.springframework.org/schema/p"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-3.0.xsd
http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-2.0.xsd http://www.springframework.org/schema/context https://www.springframework.org/schema/context/spring-context.xsd
http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util-3.1.xsd">
.
.
.
<util:constant id="configureRedisAction"
static-field="org.springframework.session.data.redis.config.ConfigureRedisAction.NO_OP"/>
<bean class="org.springframework.session.data.redis.config.annotation.web.http.RedisHttpSessionConfiguration" p:configureRedisAction-ref="configureRedisAction"/>
I was trying to run my application, which uses an Apache Ignite cache, on a local Windows machine and got the error below:
ERROR [exchange-worker-#42%ignite-instance-0%] [] - Critical system error detected. Will be handled accordingly to configured handler [hnd=class o.a.i.failure.StopNodeOrHaltFailureHandler, failureCtx=FailureContext [type=CRITICAL_ERROR, err=class o.a.i.i.pagemem.wal.StorageException: Restore wal pointer = null, while status.endPtr = FileWALPointer [idx=0, fileOff=3746370, len=53]. Can't restore memory - critical part of WAL archive is missing.]]
class org.apache.ignite.internal.pagemem.wal.StorageException: Restore wal pointer = null, while status.endPtr = FileWALPointer [idx=0, fileOff=3746370, len=53]. Can't restore memory - critical part of WAL archive is missing.
at org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.readCheckpointAndRestoreMemory(GridCacheDatabaseSharedManager.java:759)
at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onClusterStateChangeRequest(GridDhtPartitionsExchangeFuture.java:894)
at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:641)
at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body0(GridCachePartitionExchangeManager.java:2419)
at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:2299)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:748)
Any idea what might have gone wrong?
My dataStorageConfiguration is :
<property name="dataStorageConfiguration">
<bean class="org.apache.ignite.configuration.DataStorageConfiguration">
<property name="defaultDataRegionConfiguration">
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
<property name="persistenceEnabled" value="true"/>
</bean>
</property>
<property name="walMode" value="LOG_ONLY"/>
<property name="walCompactionEnabled" value="true" />
</bean>
</property>
Did you lose your WAL? If you don't care about any preexisting data at all, consider removing your Ignite work dir (typically ignite/work or %TMP%/ignite/work).
To summarize the solution, delete the following directory:
Windows: C:\Users\<user>\AppData\Local\Temp\ignite
Linux: find the directory with the hierarchy ignite/work and delete it.
After fiddling with the basic cluster, I tried another example of native persistence in Ignite by adding the configuration below to a fresh cluster.
<!-- Enabling Apache Ignite Persistent Store. -->
<property name="dataStorageConfiguration">
<bean class="org.apache.ignite.configuration.DataStorageConfiguration">
<property name="defaultDataRegionConfiguration">
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
<property name="persistenceEnabled" value="true"/>
</bean>
</property>
</bean>
</property>
However, I am unable to perform create/insert/update/delete operations; I get the following error:
class org.apache.ignite.IgniteException: Can not perform the operation because the cluster is inactive. Note, that the cluster is considered inactive by default if Ignite Persistent Store is used to let all the nodes join the cluster. To activate the cluster call Ignite.active(true).
at org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:2017)
at org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:1979)
at org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequestHandler.executeQuery(JdbcRequestHandler.java:310)
at org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequestHandler.handle(JdbcRequestHandler.java:169)
at org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:148)
at org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:41)
at org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onMessageReceived(GridNioFilterChain.java:279)
at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109)
at org.apache.ignite.internal.util.nio.GridNioAsyncNotifyFilter$3.body(GridNioAsyncNotifyFilter.java:97)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at org.apache.ignite.internal.util.worker.GridWorkerPool$1.run(GridWorkerPool.java:70)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Any help will be appreciated.
The cluster needs to be explicitly activated if persistence is used. See details here: https://apacheignite.readme.io/docs/cluster-activation
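For example, a minimal sketch using the Ignite.active(true) call named in the error message (assuming the node starts from the question's default-config.xml):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class ActivateCluster {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start("default-config.xml");

        // With native persistence the cluster starts inactive; activate it
        // once all server nodes have joined the topology.
        if (!ignite.active())
            ignite.active(true);
    }
}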
I've recently started learning about Apache Ignite and I have a newbie question. I'm trying to create two Ignite nodes (one server and one client). I successfully started the server node, but when I try to start the client node I get an error:
[04:23:19,478][SEVERE][grid-nio-worker-0-#29%testGrid-client2%][TcpCommunicationSpi] Closing NIO session because of unhandled exception.
class org.apache.ignite.internal.util.nio.GridNioException: null
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:1595)
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:1516)
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1289)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
at org.apache.ignite.internal.util.nio.GridNioRecoveryDescriptor.ackReceived(GridNioRecoveryDescriptor.java:195)
at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$2.onMessage(TcpCommunicationSpi.java:559)
at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$2.onMessage(TcpCommunicationSpi.java:330)
at org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onMessageReceived(GridNioFilterChain.java:270)
at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:107)
at org.apache.ignite.internal.util.nio.GridNioCodecFilter.onMessageReceived(GridNioCodecFilter.java:107)
at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:107)
at org.apache.ignite.internal.util.nio.GridConnectionBytesVerifyFilter.onMessageReceived(GridConnectionBytesVerifyFilter.java:123)
at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:107)
at org.apache.ignite.internal.util.nio.GridNioServer$HeadFilter.onMessageReceived(GridNioServer.java:2332)
at org.apache.ignite.internal.util.nio.GridNioFilterChain.onMessageReceived(GridNioFilterChain.java:173)
at org.apache.ignite.internal.util.nio.GridNioServer$DirectNioClientWorker.processRead(GridNioServer.java:918)
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:1583)
... 4 more
[04:23:19,495][SEVERE][grid-nio-worker-1-#30%testGrid-client2%][TcpCommunicationSpi] Closing NIO session because of unhandled exception.
class org.apache.ignite.internal.util.nio.GridNioException: null
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:1595)
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:1516)
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1289)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
at org.apache.ignite.internal.util.nio.GridNioRecoveryDescriptor.ackReceived(GridNioRecoveryDescriptor.java:195)
at org.apache.ignite.internal.util.nio.GridNioRecoveryDescriptor.onHandshake(GridNioRecoveryDescriptor.java:278)
at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$2.connected(TcpCommunicationSpi.java:617)
at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$2.onFirstMessage(TcpCommunicationSpi.java:492)
at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$2.onMessage(TcpCommunicationSpi.java:540)
at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$2.onMessage(TcpCommunicationSpi.java:330)
at org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onMessageReceived(GridNioFilterChain.java:270)
at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:107)
at org.apache.ignite.internal.util.nio.GridNioCodecFilter.onMessageReceived(GridNioCodecFilter.java:107)
at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:107)
at org.apache.ignite.internal.util.nio.GridConnectionBytesVerifyFilter.onMessageReceived(GridConnectionBytesVerifyFilter.java:113)
at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:107)
at org.apache.ignite.internal.util.nio.GridNioServer$HeadFilter.onMessageReceived(GridNioServer.java:2332)
at org.apache.ignite.internal.util.nio.GridNioFilterChain.onMessageReceived(GridNioFilterChain.java:173)
at org.apache.ignite.internal.util.nio.GridNioServer$DirectNioClientWorker.processRead(GridNioServer.java:918)
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:1583)
I run the server node on AIX with the default configuration from the Ignite 1.7.0 distribution, and the client node on Windows 7 with the following configuration:
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd">
<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
<property name="gridName" value="testGrid-client2"/>
<property name="clientMode" value="true"/>
<!--<property name="peerClassLoadingEnabled" value="true"/>-->
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder">
<property name="addresses">
<list>
<value>10.xxx.xxx.xxx:47500..47509</value>
</list>
</property>
</bean>
</property>
</bean>
</property>
</bean>
</beans>
where 10.xxx.xxx.xxx is the IP address of my AIX machine.
Big-endian architectures have been supported for quite some time. Presently, Ignite automatically detects the underlying platform's endianness and switches to the appropriate mode of operation. As far as I remember, Apache Ignite community members have confirmed that Ignite works on Solaris and PowerPC.