I need nodes to join the baseline topology automatically so that they get an equal share of the data. This has to happen programmatically without resorting to the control.sh script.
When the 2nd node starts, the first one reports
[11:17:55] Joining node doesn't have stored group keys [node=eb8e1b5e-9c1a-4272-84ea-08a1a89a4fb8]
[11:17:55] Topology snapshot [ver=2, locNode=9747b94a, servers=2, clients=0, state=ACTIVE, CPUs=12, offheap=6.2GB, heap=4.0GB]
[11:17:55] ^-- Baseline [id=0, size=1, online=1, offline=0]
and the new node
[11:17:55] Ignite node started OK (id=eb8e1b5e)
[11:17:55] Topology snapshot [ver=2, locNode=eb8e1b5e, servers=2, clients=0, state=ACTIVE, CPUs=12, offheap=6.2GB, heap=4.0GB]
[11:17:55] ^-- Baseline [id=0, size=1, online=1, offline=0]
I read that baseline auto-adjust was needed, so I start the nodes this way:
// Starting the node
ignite = Ignition.start(cfg);
// Enable baseline auto-adjust with a 30-second timeout
ignite.cluster().baselineAutoAdjustEnabled(true);
ignite.cluster().baselineAutoAdjustTimeout(30000);
// Activate the cluster
ignite.cluster().state(ClusterState.ACTIVE);
What is the meaning of "Joining node doesn't have stored group keys"?
See code here
The message is related to TDE (Transparent Data Encryption): it means that there are no cache groups with configured encryption on the joining node.
Actually, it shouldn't be printed at all when encryption is turned off; I've created a JIRA ticket for this issue: https://issues.apache.org/jira/browse/IGNITE-16854
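For context, "configured encryption" refers to TDE being set up for a cache group. A minimal sketch of what such a configuration looks like (not from the original post; the keystore path and password are placeholders) would be something like:
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.encryption.keystore.KeystoreEncryptionSpi;

// Hypothetical TDE setup: only nodes with caches configured like this
// actually carry stored group keys.
IgniteConfiguration cfg = new IgniteConfiguration();

KeystoreEncryptionSpi encSpi = new KeystoreEncryptionSpi();
encSpi.setKeyStorePath("/path/to/tde_keystore.jks");   // placeholder path
encSpi.setKeyStorePassword("changeit".toCharArray());  // placeholder password
cfg.setEncryptionSpi(encSpi);

CacheConfiguration<Long, String> cacheCfg = new CacheConfiguration<>("encryptedCache");
cacheCfg.setEncryptionEnabled(true); // marks this cache group as encrypted
cfg.setCacheConfiguration(cacheCfg);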
My work system consists of Spring web applications; it uses Redis as a transaction counter and conditionally blocks transaction requests.
The transaction is as follows:
Check whether or not the data exists. (HGET)
If it doesn't, save a new entry with count 0 and set an expiration time. (HSET, EXPIRE)
Increase the count value. (INCRBY)
If the increased count reaches a specific configured limit, set the transaction to 'blocked'. (HSET)
The limit value is my company's business policy.
These read and write operations are issued back to back, immediately one after another (a rough sketch of the flow follows below).
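For reference, a rough sketch of that flow, assuming Spring Data Redis's StringRedisTemplate; the key name, hash fields, TTL and limit below are placeholders, not the actual business values:
import java.util.concurrent.TimeUnit;
import org.springframework.data.redis.core.HashOperations;
import org.springframework.data.redis.core.StringRedisTemplate;

public boolean countTransaction(StringRedisTemplate redis, String txKey, long limit) {
    HashOperations<String, String, String> hash = redis.opsForHash();

    // 1. Check whether the data exists (HGET).
    String count = hash.get(txKey, "count");

    // 2. If it doesn't, save a new entry with count 0 and set an expiration (HSET, EXPIRE).
    if (count == null) {
        hash.put(txKey, "count", "0");
        redis.expire(txKey, 10, TimeUnit.MINUTES); // placeholder TTL
    }

    // 3. Increase the count value (the INCRBY step; on a hash field this is HINCRBY).
    long newCount = hash.increment(txKey, "count", 1);

    // 4. If the count reaches the configured limit, mark the transaction as blocked (HSET).
    if (newCount >= limit) {
        hash.put(txKey, "blocked", "true");
        return false; // blocked
    }
    return true; // allowed
}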
Currently, I use one Redis instance on one machine (only a master, no replicas).
I want Redis HA, so I need slave instances, but at the same time I want all reads and writes to go only to the master instances because of slave replication latency.
After some research, I found that a proxy server is a common way to get Redis HA. However, with a proxy, it seems impossible to send requests only to the master instances and keep the slaves purely for failover.
Is it possible??
Thanks in advance.
What you need is Redis Sentinel.
With Redis Sentinel, you get the current master's address from a sentinel and do your reads and writes against that master. If the master goes down, Redis Sentinel performs the failover and elects a new master; you then get the new master's address from the sentinel again.
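Since the question mentions Spring web applications, a minimal sketch of a Sentinel-based setup with Spring Data Redis and Lettuce might look like the following (the master name and sentinel addresses are placeholders):
import io.lettuce.core.ReadFrom;
import org.springframework.context.annotation.Bean;
import org.springframework.data.redis.connection.RedisSentinelConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceClientConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;

@Bean
public LettuceConnectionFactory redisConnectionFactory() {
    // Sentinels know which node is the current master and announce a new one after failover.
    RedisSentinelConfiguration sentinelConfig = new RedisSentinelConfiguration()
            .master("mymaster")               // placeholder master name
            .sentinel("127.0.0.1", 26379)     // placeholder sentinel addresses
            .sentinel("127.0.0.1", 26380)
            .sentinel("127.0.0.1", 26381);

    // Send reads (and writes) to the master only; slaves are used solely for failover.
    LettuceClientConfiguration clientConfig = LettuceClientConfiguration.builder()
            .readFrom(ReadFrom.MASTER)
            .build();

    return new LettuceConnectionFactory(sentinelConfig, clientConfig);
}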
Since you're going to use Lettuce as the Redis Cluster driver, you should set the read preference to MASTER and things should work fine. A sample configuration might look like this:
// Read only from master nodes (no reads from replicas)
LettuceClientConfiguration lettuceClientConfiguration =
        LettuceClientConfiguration.builder().readFrom(ReadFrom.MASTER).build();

// Cluster topology: list the known nodes; the driver discovers the rest
RedisClusterConfiguration redisClusterConfiguration = new RedisClusterConfiguration();
List<RedisNode> redisNodes = new ArrayList<>();
redisNodes.add(new RedisNode("127.0.0.1", 9000));
redisNodes.add(new RedisNode("127.0.0.1", 9001));
redisNodes.add(new RedisNode("127.0.0.1", 9002));
redisNodes.add(new RedisNode("127.0.0.1", 9003));
redisNodes.add(new RedisNode("127.0.0.1", 9004));
redisNodes.add(new RedisNode("127.0.0.1", 9005));
redisClusterConfiguration.setClusterNodes(redisNodes);

// Build the connection factory with the cluster topology and the MASTER read preference
LettuceConnectionFactory lettuceConnectionFactory =
        new LettuceConnectionFactory(redisClusterConfiguration, lettuceClientConfiguration);
lettuceConnectionFactory.afterPropertiesSet();
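A side note on driver versions (an assumption not from the original answer): in Lettuce 6.x, ReadFrom.MASTER is deprecated in favor of the equivalent ReadFrom.UPSTREAM, so the constant name may differ depending on your Lettuce version.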
See it in action at Redis Cluster Configuration.
Ignite 2.8.0. I enabled persistence with code like this:
IgniteConfiguration igniteCfg = new IgniteConfiguration();
//igniteCfg.setClientMode(true);
DataStorageConfiguration dataStorageCfg = new DataStorageConfiguration();
dataStorageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
igniteCfg.setDataStorageConfiguration(dataStorageCfg);
Ignite ignite = Ignition.start(igniteCfg);
Then I get an exception like the one below:
Caused by: class org.apache.ignite.spi.IgniteSpiException: Joining persistence node to in-memory cluster couldn't be allowed due to baseline auto-adjust is enabled and timeout equal to 0
at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.checkFailedError(TcpDiscoverySpi.java:1997)
at org.apache.ignite.spi.discovery.tcp.ServerImpl.joinTopology(ServerImpl.java:1116)
at org.apache.ignite.spi.discovery.tcp.ServerImpl.spiStart(ServerImpl.java:427)
at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.spiStart(TcpDiscoverySpi.java:2099)
at org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:297)
... 15 more
Can anyone help me?
Thanks.
After starting the first node, invoke ignite.cluster().baselineAutoAdjustEnabled(false).
You can also use bin/control.(sh|bat) --baseline auto_adjust [disable|enable] [timeout <timeoutMillis>] [--yes]
Please note that we don't recommend running mixed persistent/non-persistent clusters, since they see very little testing. If you must, make sure that data regions have the same persistenceEnabled settings on all nodes.
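For example, a minimal sketch of the first node's startup following that suggestion (persistence enabled on the default region, auto-adjust disabled right after start) could look like this:
IgniteConfiguration cfg = new IgniteConfiguration();

// Persistence enabled on the default data region, same as in the question.
DataStorageConfiguration storageCfg = new DataStorageConfiguration();
storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
cfg.setDataStorageConfiguration(storageCfg);

Ignite ignite = Ignition.start(cfg);

// Disable baseline auto-adjust on the first node, as suggested above.
ignite.cluster().baselineAutoAdjustEnabled(false);

// A persistent cluster starts inactive; activate it once the nodes have joined.
ignite.cluster().state(ClusterState.ACTIVE);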
I have set maxmemory to 4G on the Redis server and the eviction policy is set to volatile-lru. Currently it is using about 4.41G of memory. I don't know how this is possible, since with the eviction policy set it should start evicting keys as soon as memory hits maxmemory.
I am running Redis in cluster mode with 3 masters and a replication factor of 1. This is happening on only one of the slave Redis instances.
The output of redis-cli info memory is:
# Memory
used_memory:4734647320
used_memory_human:4.41G
used_memory_rss:4837548032
used_memory_rss_human:4.51G
used_memory_peak:4928818072
used_memory_peak_human:4.59G
used_memory_peak_perc:96.06%
used_memory_overhead:2323825684
used_memory_startup:1463072
used_memory_dataset:2410821636
used_memory_dataset_perc:50.93%
allocator_allocated:4734678320
allocator_active:4773904384
allocator_resident:4844134400
total_system_memory:32891367424
total_system_memory_human:30.63G
used_memory_lua:37888
used_memory_lua_human:37.00K
used_memory_scripts:0
used_memory_scripts_human:0B
number_of_cached_scripts:0
maxmemory:4294967296
maxmemory_human:4.00G
maxmemory_policy:volatile-lru
allocator_frag_ratio:1.01
allocator_frag_bytes:39226064
allocator_rss_ratio:1.01
allocator_rss_bytes:70230016
rss_overhead_ratio:1.00
rss_overhead_bytes:-6586368
mem_fragmentation_ratio:1.02
mem_fragmentation_bytes:102920560
mem_not_counted_for_evict:0
mem_replication_backlog:1048576
mem_clients_slaves:0
mem_clients_normal:1926964
mem_aof_buffer:0
mem_allocator:jemalloc-5.1.0
active_defrag_running:0
lazyfree_pending_objects:0
It is important to understand that the eviction process works like this:
A client runs a new command, resulting in more data added.
Redis checks the memory usage, and if it is greater than the maxmemory limit, it evicts keys according to the policy.
A new command is executed, and so forth.
So we continuously cross the boundaries of the memory limit, by going over it, and then by evicting keys to return back under the limits.
If a command results in a lot of memory being used (like a big set intersection stored into a new key) for some time the memory limit can be surpassed by a noticeable amount.
Reference: https://redis.io/topics/lru-cache
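Applying that to the numbers above: used_memory is 4,734,647,320 bytes (~4.41G) against a maxmemory of 4,294,967,296 bytes (4.00G), i.e. roughly 440 MB (about 10%) over the limit, which is consistent with the overshoot described in the quoted eviction loop.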
I have set up an Ignite 2.3 server node along with 32 client nodes. After running multiple queries, I observed an OutOfMemoryError in the server node logs.
Server Configuration:
Configured 4 GB Java max heap memory.
Ignite Persistence is disabled
Using default data region.
Using Spring Data to run queries on the Ignite node.
Captured memory snapshot of the Ignite server node:
^-- Node [id=, uptime=44:33:12.948]
^-- H/N/C [hosts=32, nodes=32, CPUs=39]
^-- CPU [cur=3.7%, avg=0.23%, GC=0%]
^-- PageMemory [pages=303325]
^-- Heap [used=2404MB, free=36.21%, comm=3769MB]
^-- Non heap [used=78MB, free=-1%, comm=80MB]
^-- Public thread pool [active=0, idle=0, qSize=0]
^-- System thread pool [active=0, idle=6, qSize=0]
^-- Outbound messages queue [size=0]
Heap dump analysis:
query-#8779
at java.nio.Bits$1.newDirectByteBuffer(JILjava/lang/Object;)Ljava/nio/ByteBuffer; (Bits.java:758)
at org.apache.ignite.internal.util.GridUnsafe.wrapPointer(JI)Ljava/nio/ByteBuffer; (GridUnsafe.java:113)
at org.apache.ignite.internal.pagemem.impl.PageMemoryNoStoreImpl.pageBuffer(J)Ljava/nio/ByteBuffer; (PageMemoryNoStoreImpl.java:253)
at org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.initFromLink(Lorg/apache/ignite/internal/processors/cache/CacheGroupContext;Lorg/apache/ignite/internal/processors/cache/GridCacheSharedContext;Lorg/apache/ignite/internal/pagemem/PageMemory;Lorg/apache/ignite/internal/processors/cache/persistence/CacheDataRowAdapter$RowData;)V (CacheDataRowAdapter.java:167)
at org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.initFromLink(Lorg/apache/ignite/internal/processors/cache/CacheGroupContext;Lorg/apache/ignite/internal/processors/cache/persistence/CacheDataRowAdapter$RowData;)V (CacheDataRowAdapter.java:102)
at org.apache.ignite.internal.processors.query.h2.database.H2RowFactory.getRow(J)Lorg/apache/ignite/internal/processors/query/h2/opt/GridH2Row; (H2RowFactory.java:62)
at org.apache.ignite.internal.processors.query.h2.database.io.H2ExtrasLeafIO.getLookupRow(Lorg/apache/ignite/internal/processors/cache/persistence/tree/BPlusTree;JI)Lorg/h2/result/SearchRow; (H2ExtrasLeafIO.java:126)
at org.apache.ignite.internal.processors.query.h2.database.io.H2ExtrasLeafIO.getLookupRow(Lorg/apache/ignite/internal/processors/cache/persistence/tree/BPlusTree;JI)Ljava/lang/Object; (H2ExtrasLeafIO.java:36)
at org.apache.ignite.internal.processors.query.h2.database.H2Tree.getRow(Lorg/apache/ignite/internal/processors/cache/persistence/tree/io/BPlusIO;JILjava/lang/Object;)Lorg/apache/ignite/internal/processors/query/h2/opt/GridH2Row; (H2Tree.java:123)
at org.apache.ignite.internal.processors.query.h2.database.H2Tree.getRow(Lorg/apache/ignite/internal/processors/cache/persistence/tree/io/BPlusIO;JILjava/lang/Object;)Ljava/lang/Object; (H2Tree.java:40)
at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$ForwardCursor.fillFromBuffer(JLorg/apache/ignite/internal/processors/cache/persistence/tree/io/BPlusIO;II)Z (BPlusTree.java:4548)
at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$ForwardCursor.nextPage()Z (BPlusTree.java:4641)
at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$ForwardCursor.next()Z (BPlusTree.java:4570)
at org.apache.ignite.internal.processors.query.h2.H2Cursor.next()Z (H2Cursor.java:78)
at org.h2.index.IndexCursor.next()Z (IndexCursor.java:305)
at org.h2.table.TableFilter.next()Z (TableFilter.java:499)
at org.h2.command.dml.Select$LazyResultQueryFlat.fetchNextRow()[Lorg/h2/value/Value; (Select.java:1452)
at org.h2.result.LazyResult.hasNext()Z (LazyResult.java:79)
at org.h2.result.LazyResult.next()Z (LazyResult.java:59)
at org.h2.command.dml.Select.queryFlat(ILorg/h2/result/ResultTarget;J)Lorg/h2/result/LazyResult; (Select.java:519)
at org.h2.command.dml.Select.queryWithoutCache(ILorg/h2/result/ResultTarget;)Lorg/h2/result/ResultInterface; (Select.java:625)
at org.h2.command.dml.Query.queryWithoutCacheLazyCheck(ILorg/h2/result/ResultTarget;)Lorg/h2/result/ResultInterface; (Query.java:114)
at org.h2.command.dml.Query.query(ILorg/h2/result/ResultTarget;)Lorg/h2/result/ResultInterface; (Query.java:352)
at org.h2.command.dml.Query.query(I)Lorg/h2/result/ResultInterface; (Query.java:333)
at org.h2.command.CommandContainer.query(I)Lorg/h2/result/ResultInterface; (CommandContainer.java:113)
at org.h2.command.Command.executeQuery(IZ)Lorg/h2/result/ResultInterface; (Command.java:201)
at org.h2.jdbc.JdbcPreparedStatement.executeQuery()Ljava/sql/ResultSet; (JdbcPreparedStatement.java:111)
at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSqlQuery(Ljava/sql/Connection;Ljava/sql/PreparedStatement;ILorg/apache/ignite/internal/processors/query/GridQueryCancel;)Ljava/sql/ResultSet; (IgniteH2Indexing.java:961)
at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSqlQueryWithTimer(Ljava/sql/PreparedStatement;Ljava/sql/Connection;Ljava/lang/String;Ljava/util/Collection;ILorg/apache/ignite/internal/processors/query/GridQueryCancel;)Ljava/sql/ResultSet; (IgniteH2Indexing.java:1027)
at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSqlQueryWithTimer(Ljava/sql/Connection;Ljava/lang/String;Ljava/util/Collection;ZILorg/apache/ignite/internal/processors/query/GridQueryCancel;)Ljava/sql/ResultSet; (IgniteH2Indexing.java:1006)
Is there a chance that you tried to execute a SELECT * without a WHERE clause, or a similar request with a huge result set? The result set will be retained on the heap, which leads to OOM when serving such a request.
Either use a LIMIT clause in your SQL query, or set lazy=true on your
Connection/SqlFieldsQuery.
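For example, with SqlFieldsQuery that might look like the sketch below, assuming an Ignite instance named ignite; the cache name, table and filter are placeholders:
import java.util.List;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.SqlFieldsQuery;

IgniteCache<?, ?> cache = ignite.cache("PersonCache"); // placeholder cache name

SqlFieldsQuery qry = new SqlFieldsQuery("SELECT id, name FROM Person WHERE age > ?")
        .setArgs(30)
        .setLazy(true); // stream rows instead of materializing the whole result set on heap

try (QueryCursor<List<?>> cursor = cache.query(qry)) {
    for (List<?> row : cursor) {
        // process one row at a time
    }
}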
I have a Redis cluster with multiple masters and 1 slave per master.
I am using jedis. Redis version is : 3.0.7.
Now when one of the masters goes down, Jedis tries to promote the slave to master, and it takes more than 2 seconds, which is a huge amount of time in my scenario.
Following are the two different exceptions and their stack traces: JedisClusterException and JedisClusterMaxRedirectionsException.
JedisConnectionException!! - exc: redis.clients.jedis.exceptions.JedisClusterException: CLUSTERDOWN The cluster is down
redis.clients.jedis.exceptions.JedisClusterException: CLUSTERDOWN The cluster is down
at redis.clients.jedis.Protocol.processError(Protocol.java:115)
at redis.clients.jedis.Protocol.process(Protocol.java:151)
at redis.clients.jedis.Protocol.read(Protocol.java:205)
at redis.clients.jedis.Connection.readProtocolWithCheckingBroken(Connection.java:297)
at redis.clients.jedis.Connection.getIntegerReply(Connection.java:222)
at redis.clients.jedis.Jedis.incr(Jedis.java:548)
at redis.clients.jedis.JedisCluster$27.execute(JedisCluster.java:319)
at redis.clients.jedis.JedisCluster$27.execute(JedisCluster.java:316)
at redis.clients.jedis.JedisClusterCommand.runWithRetries(JedisClusterCommand.java:119)
at redis.clients.jedis.JedisClusterCommand.run(JedisClusterCommand.java:30)
at redis.clients.jedis.JedisCluster.incr(JedisCluster.java:321)
at com.demo.redisCluster.App.main(App.java:37)
.................................................
JedisClusterMaxRedirectionsException!! - exc: redis.clients.jedis.exceptions.JedisClusterMaxRedirectionsException: Too many Cluster redirections?
redis.clients.jedis.exceptions.JedisClusterMaxRedirectionsException: Too many Cluster redirections?
at redis.clients.jedis.JedisClusterCommand.runWithRetries(JedisClusterCommand.java:97)
at redis.clients.jedis.JedisClusterCommand.runWithRetries(JedisClusterCommand.java:131)
at redis.clients.jedis.JedisClusterCommand.runWithRetries(JedisClusterCommand.java:152)
at redis.clients.jedis.JedisClusterCommand.runWithRetries(JedisClusterCommand.java:131)
at redis.clients.jedis.JedisClusterCommand.runWithRetries(JedisClusterCommand.java:152)
at redis.clients.jedis.JedisClusterCommand.runWithRetries(JedisClusterCommand.java:131)
at redis.clients.jedis.JedisClusterCommand.run(JedisClusterCommand.java:30)
at redis.clients.jedis.JedisCluster.incr(JedisCluster.java:321)
at com.demo.redisCluster.App.main(App.java:37)