We are currently using GridGain Community Edition 8.8.10. We have set up the Ignite cluster in Kubernetes using the Ignite operator. The cluster consists of 2 nodes with native persistence enabled, and we use a thick client to connect to the Ignite cluster. The clients are also deployed in the same Kubernetes cluster. The memory configuration of the cluster is as follows:
-DIGNITE_WAL_MMAP=false -DIGNITE_QUIET=false -Xms6g -Xmx6g -XX:+AlwaysPreTouch -XX:+UseG1GC -XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
    <property name="name" value="Knowledge_Region"/>
    <!-- Memory region of 20 MB initial size. -->
    <property name="initialSize" value="#{20 * 1024 * 1024}"/>
    <!-- Maximum size is 9 GB. -->
    <property name="maxSize" value="#{9L * 1024 * 1024 * 1024}"/>
    <!-- Enabling eviction for this memory region. -->
    <property name="pageEvictionMode" value="RANDOM_2_LRU"/>
    <property name="persistenceEnabled" value="true"/>
    <!-- Enabling SEGMENTED_LRU page replacement for this region. -->
    <property name="pageReplacementMode" value="SEGMENTED_LRU"/>
</bean>
We are using Ignite string functions to query the cache. The cache value class has the following structure:
@QuerySqlField(index = true, inlineSize = 100)
private String value;

@QuerySqlField(name = "label", index = true, inlineSize = 100)
private String label;

@QuerySqlField(name = "type", index = true, inlineSize = 100)
@AffinityKeyMapped
private String type;

private String typeLabel;

private List<String> synonyms;
The SQL query we are using to get the data is as follows:
select _key, _val from TESTCACHEVALUE USE INDEX(TESTCACHEVALUE_label_IDX) WHERE REGEXP_LIKE(label, 'unit.*s.*','i') LIMIT 8
The query plan that gets generated (from the long-running query warning):
[05:04:56,613][WARNING][long-qry-#36][LongRunningQueryManager] Query execution is too long [duration=1124ms, type=MAP, distributedJoin=false, enforceJoinOrder=false, lazy=false, schema=staging_infrastructuretesting_business_object, sql='SELECT
"__Z0"."_KEY" AS "__C0_0",
"__Z0"."_VAL" AS "__C0_1"
FROM "staging_infrastructuretesting_business_object"."TESTCACHEVALUE" AS "__Z0" USE INDEX ("TESTCACHEVALUE_LABEL_IDX")
WHERE REGEXP_LIKE("__Z0"."LABEL", 'uni.*', 'i') FETCH FIRST 8 ROWS ONLY', plan=SELECT
__Z0._KEY AS __C0_0,
__Z0._VAL AS __C0_1
FROM staging_infrastructuretesting_business_object.TESTCACHEVALUE __Z0 USE INDEX (TESTCACHEVALUE_LABEL_IDX)
/* staging_infrastructuretesting_business_object.TESTCACHEVALUE.__SCAN_ */
/* scanCount: 289643 */
/* lookupCount: 1 */
WHERE REGEXP_LIKE(__Z0.LABEL, 'uni.*', 'i')
FETCH FIRST 8 ROWS ONLY
As far as I can see, the query does a full scan and does not use the index specified in the query hint.
The cache contains 5 million objects.
The memory statistics of the cluster are as follows:
^-- Node [id=d87d1212, uptime=00:30:00.229]
^-- Cluster [hosts=6, CPUs=20, servers=2, clients=4, topVer=12, minorTopVer=25]
^-- Network [addrs=[10.57.5.10, 127.0.0.1], discoPort=47500, commPort=47100]
^-- CPU [CPUs=1, curLoad=16%, avgLoad=38.3%, GC=0%]
^-- Heap [used=4265MB, free=30.58%, comm=6144MB]
^-- Off-heap memory [used=4872MB, free=58.58%, allocated=11564MB]
^-- Page memory [pages=620072]
^-- sysMemPlc region [type=internal, persistence=true, lazyAlloc=false,
... initCfg=40MB, maxCfg=100MB, usedRam=0MB, freeRam=99.96%, allocRam=100MB, allocTotal=0MB]
^-- metastoreMemPlc region [type=internal, persistence=true, lazyAlloc=false,
... initCfg=40MB, maxCfg=100MB, usedRam=0MB, freeRam=99.87%, allocRam=0MB, allocTotal=0MB]
^-- TxLog region [type=internal, persistence=true, lazyAlloc=false,
... initCfg=40MB, maxCfg=100MB, usedRam=0MB, freeRam=100%, allocRam=100MB, allocTotal=0MB]
^-- volatileDsMemPlc region [type=internal, persistence=false, lazyAlloc=true,
... initCfg=40MB, maxCfg=100MB, usedRam=0MB, freeRam=100%, allocRam=0MB]
^-- Default_Region region [type=default, persistence=true, lazyAlloc=true,
... initCfg=20MB, maxCfg=9216MB, usedRam=4781MB, freeRam=48.12%, allocRam=9216MB, allocTotal=4753MB]
^-- Ignite persistence [used=4844MB]
^-- Outbound messages queue [size=0]
^-- Public thread pool [active=0, idle=0, qSize=0]
^-- System thread pool [active=0, idle=8, qSize=0]
^-- Striped thread pool [active=0, idle=8, qSize=0]
From the memory snapshot, it seems we have enough memory in the cluster.
What I have tried so far:
An index hint in the query
Applying a LIMIT to the query
A partitioned cache with query parallelism 3
skipReducerOnUpdate set to true
onheapCacheEnabled set to true
I am not sure why the query is taking so long. Please let me know if I have missed anything.
One more observation: according to the query execution plan the time taken is around 2 seconds, but on the client side the response arrives in about 5 seconds.
Thanks in advance.
It seems you are missing the fact that the Apache Ignite SQL engine uses a B+Tree data structure for its indexes. A B+Tree relies on an ordering of the stored values (there must be a way to compare them). The only kind of textual search this structure can serve is a prefix search, because a fixed prefix gives the search algorithm a branching condition. Here is an example:
select _key, _val from TESTCACHEVALUE WHERE label LIKE 'unit%'
In that case you would see the TESTCACHEVALUE_label_IDX index being used even without a hint.
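To double-check which plan is chosen, you can run EXPLAIN through the SQL API. A minimal sketch, assuming a thick-client Ignite instance; the cache name "TESTCACHEVALUE" and the schema name are taken from your log and may need adjusting:

// Sketch: verify the plan for the prefix LIKE query. Cache/schema names are assumptions
// taken from the question and may differ in your deployment.
import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class ExplainPrefixQuery {
    public static void explain(Ignite ignite) {
        SqlFieldsQuery qry = new SqlFieldsQuery(
            "EXPLAIN SELECT _key, _val FROM TESTCACHEVALUE WHERE label LIKE 'unit%'")
            .setSchema("staging_infrastructuretesting_business_object");

        // The returned rows contain the H2 plan; look for TESTCACHEVALUE_LABEL_IDX in it.
        for (List<?> row : ignite.cache("TESTCACHEVALUE").query(qry))
            System.out.println(row.get(0));
    }
}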
For your scenario, REGEXP_LIKE is just an iteration over the rows, applying Matcher.find() to each label one by one.
Try the Ignite text query machinery instead. It is based on Apache Lucene and looks more suitable for this case.
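Here is a minimal sketch of how a text query could look. The class and field names are assumptions modeled on your cache; the field you want to search must be annotated with @QueryTextField so it gets into the Lucene index.

// Sketch of an Ignite text query (Lucene-based). Names are illustrative.
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.TextQuery;
import org.apache.ignite.cache.query.annotations.QueryTextField;
import org.apache.ignite.configuration.CacheConfiguration;

public class TextQueryExample {
    /** Value class; only the searched field is shown. */
    public static class TestCacheValue {
        @QueryTextField
        private final String label;

        public TestCacheValue(String label) {
            this.label = label;
        }
    }

    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            CacheConfiguration<String, TestCacheValue> ccfg =
                new CacheConfiguration<String, TestCacheValue>("testCacheValue")
                    .setIndexedTypes(String.class, TestCacheValue.class);

            IgniteCache<String, TestCacheValue> cache = ignite.getOrCreateCache(ccfg);

            cache.put("1", new TestCacheValue("unit tests"));

            // Lucene query syntax: wildcard search instead of REGEXP_LIKE.
            TextQuery<String, TestCacheValue> qry =
                new TextQuery<>(TestCacheValue.class, "unit*");

            cache.query(qry).forEach(e -> System.out.println(e.getKey() + " -> " + e.getValue()));
        }
    }
}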
No, you don't want to use an index hint for this problem; forcing the index will only delay your query further.
The engine will still parse the table top to bottom, doing a full scan.
The query below should work:
select _key, _val from TESTCACHEVALUE WHERE label LIKE 'unit%s%'
I have set a maxmemory of 4 GB on my Redis server, and the eviction policy is set to volatile-lru. Currently it is using about 4.41 GB of memory. I don't understand how this is possible: since an eviction policy is set, Redis should start evicting keys as soon as memory usage hits maxmemory.
I am running Redis in cluster mode with 3 masters and a replication factor of 1. This is happening on only one of the slave (replica) nodes.
The output of
redis-cli info memory
is:
# Memory
used_memory:4734647320
used_memory_human:4.41G
used_memory_rss:4837548032
used_memory_rss_human:4.51G
used_memory_peak:4928818072
used_memory_peak_human:4.59G
used_memory_peak_perc:96.06%
used_memory_overhead:2323825684
used_memory_startup:1463072
used_memory_dataset:2410821636
used_memory_dataset_perc:50.93%
allocator_allocated:4734678320
allocator_active:4773904384
allocator_resident:4844134400
total_system_memory:32891367424
total_system_memory_human:30.63G
used_memory_lua:37888
used_memory_lua_human:37.00K
used_memory_scripts:0
used_memory_scripts_human:0B
number_of_cached_scripts:0
maxmemory:4294967296
maxmemory_human:4.00G
maxmemory_policy:volatile-lru
allocator_frag_ratio:1.01
allocator_frag_bytes:39226064
allocator_rss_ratio:1.01
allocator_rss_bytes:70230016
rss_overhead_ratio:1.00
rss_overhead_bytes:-6586368
mem_fragmentation_ratio:1.02
mem_fragmentation_bytes:102920560
mem_not_counted_for_evict:0
mem_replication_backlog:1048576
mem_clients_slaves:0
mem_clients_normal:1926964
mem_aof_buffer:0
mem_allocator:jemalloc-5.1.0
active_defrag_running:0
lazyfree_pending_objects:0
It is important to understand that the eviction process works like this:
A client runs a new command, resulting in more data added.
Redis checks the memory usage, and if it is greater than the maxmemory limit, it evicts keys according to the policy.
A new command is executed, and so forth.
So we continuously cross the boundaries of the memory limit, by going over it, and then by evicting keys to return back under the limits.
If a command results in a lot of memory being used (like a big set intersection stored into a new key) for some time the memory limit can be surpassed by a noticeable amount.
Reference: https://redis.io/topics/lru-cache
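If you want to watch this overshoot from application code, here is a small sketch using the Jedis client (host and port are placeholders) that compares used_memory with maxmemory from INFO MEMORY:

// Sketch: compare used_memory with maxmemory via INFO MEMORY (Jedis client).
// Host/port are placeholders; point it at the node that exceeds the limit.
import redis.clients.jedis.Jedis;

public class RedisMemoryCheck {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            String info = jedis.info("memory");

            long used = parseField(info, "used_memory:");
            long max = parseField(info, "maxmemory:");

            System.out.printf("used=%d maxmemory=%d over_limit=%b%n", used, max, used > max);
        }
    }

    // INFO output is a list of "field:value" lines separated by CRLF.
    private static long parseField(String info, String prefix) {
        for (String line : info.split("\r\n"))
            if (line.startsWith(prefix))
                return Long.parseLong(line.substring(prefix.length()).trim());
        return -1L;
    }
}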
A moderate workload on a 3-node Ignite cluster causes one node to fail with striped pool starvation while archiving the WAL.
This happens once or twice a week.
I have already checked for IO problems that could hang the WAL rollover, but the issue still persists.
I am using the latest Ignite 2.7 as a library inside a Spring Boot application.
: >>> Possible starvation in striped pool.
Deadlock: false
Completed: 1397
Thread [name="sys-stripe-7-#8%server.node%", id=22, state=WAITING, blockCnt=3, waitCnt=757]
Lock [object=java.util.concurrent.locks.ReentrantLock$NonfairSync#b01791b, ownerName=sys-#214%server.node%, ownerId=248]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
at java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:209)
at java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:285)
at o.a.i.i.processors.cache.persistence.wal.FileWriteAheadLogManager$FileWriteHandle.awaitNext(FileWriteAheadLogManager.java:2871)
at o.a.i.i.processors.cache.persistence.wal.FileWriteAheadLogManager$FileWriteHandle.access$2300(FileWriteAheadLogManager.java:2451)
at o.a.i.i.processors.cache.persistence.wal.FileWriteAheadLogManager.rollOver(FileWriteAheadLogManager.java:1205)
at o.a.i.i.processors.cache.persistence.wal.FileWriteAheadLogManager.log(FileWriteAheadLogManager.java:836)
at o.a.i.i.processors.cache.GridCacheMapEntry.logUpdate(GridCacheMapEntry.java:4267)
at o.a.i.i.processors.cache.GridCacheMapEntry$AtomicCacheUpdateClosure.update(GridCacheMapEntry.java:6333)
at o.a.i.i.processors.cache.GridCacheMapEntry$AtomicCacheUpdateClosure.call(GridCacheMapEntry.java:6082)
at o.a.i.i.processors.cache.GridCacheMapEntry$AtomicCacheUpdateClosure.call(GridCacheMapEntry.java:5782)
at o.a.i.i.processors.cache.persistence.tree.BPlusTree$Invoke.invokeClosure(BPlusTree.java:3719)
at o.a.i.i.processors.cache.persistence.tree.BPlusTree$Invoke.access$5900(BPlusTree.java:3613)
at o.a.i.i.processors.cache.persistence.tree.BPlusTree.invokeDown(BPlusTree.java:1895)
at o.a.i.i.processors.cache.persistence.tree.BPlusTree.invoke(BPlusTree.java:1779)
at o.a.i.i.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke0(IgniteCacheOffheapManagerImpl.java:1638)
at o.a.i.i.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke(IgniteCacheOffheapManagerImpl.java:1621)
at o.a.i.i.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.invoke(GridCacheOffheapManager.java:1935)
at o.a.i.i.processors.cache.IgniteCacheOffheapManagerImpl.invoke(IgniteCacheOffheapManagerImpl.java:428)
at o.a.i.i.processors.cache.GridCacheMapEntry.innerUpdate(GridCacheMapEntry.java:2295)
at o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.processDhtAtomicUpdateRequest(GridDhtAtomicCache.java:3242)
at o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.access$600(GridDhtAtomicCache.java:135)
at o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$7.apply(GridDhtAtomicCache.java:309)
at o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$7.apply(GridDhtAtomicCache.java:304)
at o.a.i.i.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1056)
at o.a.i.i.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:581)
at o.a.i.i.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:380)
at o.a.i.i.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:306)
at o.a.i.i.processors.cache.GridCacheIoManager.access$100(GridCacheIoManager.java:101)
at o.a.i.i.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:295)
at o.a.i.i.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1569)
at o.a.i.i.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1197)
at o.a.i.i.managers.communication.GridIoManager.access$4200(GridIoManager.java:127)
at o.a.i.i.managers.communication.GridIoManager$9.run(GridIoManager.java:1093)
at o.a.i.i.util.StripedExecutor$Stripe.body(StripedExecutor.java:505)
at o.a.i.i.util.worker.GridWorker.run(GridWorker.java:120)
at java.lang.Thread.run(Thread.java:748)
ERROR --- [tcp-disco-msg-worker-#2%server.node%] [] o.a.i.i.u.t.G : Blocked system-critical thread has been detected. This can lead to cluster-wide undefined behaviour [threadName=sys-stripe-1, blockedFor=10s]
WARN --- [tcp-disco-msg-worker-#2%server.node%] [] o.a.i.i.u.t.G : Thread [name="sys-stripe-1-#2%server.node%", id=16, state=WAITING, blockCnt=0, waitCnt=754]
Lock [object=java.util.concurrent.locks.ReentrantLock$NonfairSync#b01791b, ownerName=sys-#214%server.node%, ownerId=248]
The failure detection feature is not tuned very well by default in Apache Ignite 2.7. You can turn it off (by setting the failure handler to NoOpFailureHandler) or set a large failureDetectionTimeout to avoid such messages (and the shutdown of nodes).
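For example, a minimal configuration sketch (the timeout values are illustrative, not recommendations):

// Sketch: relax failure handling so long WAL rollovers do not take the node down.
// Timeout values are illustrative only.
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.failure.NoOpFailureHandler;

public class FailureHandlingConfig {
    public static IgniteConfiguration configure() {
        return new IgniteConfiguration()
            // Option 1: do not stop the node when a system-critical thread is reported as blocked.
            .setFailureHandler(new NoOpFailureHandler())
            // Option 2 (Ignite 2.7+): give system workers more time before they are flagged as blocked.
            .setSystemWorkerBlockedTimeout(60_000)
            // And/or raise the overall failure detection timeout.
            .setFailureDetectionTimeout(60_000);
    }
}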
Hi there,
I got an error when using an Ignite cache.
My system selects a master node using ZooKeeper and has many slave nodes. The master processes expired values from an Ignite cache and puts them into an Ignite queue. The slave nodes feed data into the Ignite cache using streamer.addData(k, v) and consume the Ignite queue.
My code is:
Ignite cache and streamer:
// Use the ZooKeeper IpFinder for discovery.
ignite = Ignition.getOrStart(igniteConfiguration);
igniteCache = ignite.getOrCreateCache(cacheConfiguration);

// onCacheExpired: on the master, resolve the expired entry and put it into igniteQueue.
igniteCache.registerCacheEntryListener(new MutableCacheEntryListenerConfiguration<>(
    (Factory<CacheEntryListener<K, CountValue>>)() ->
        (CacheEntryExpiredListener<K, CountValue>)this::onCacheExpired,
    null, true, true));

// Entries expire one minute after creation.
cacheConfiguration.setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(Duration.ONE_MINUTE));

igniteDataStreamer = ignite.dataStreamer(igniteCache.getName());
igniteDataStreamer.deployClass(BaseIgniteStreamCount.class);
igniteDataStreamer.allowOverwrite(true);
igniteDataStreamer.receiver(StreamTransformer.from((CacheEntryProcessor<K, CountValue, Object>)(e, arg) -> {
    // Process the value.
    return null;
}));
The master processes the expired entries from the cache and puts them into an Ignite queue:
CollectionConfiguration collectionConfiguration = new CollectionConfiguration().setCollocated(true);
queue = ignite.queue(igniteQueueName, 0, collectionConfiguration);
The slaves consume the queue.
However, after running for a few hours I got the error log below:
2017-09-14 17:06:45,256 org.apache.ignite.logger.java.JavaLogger warning
WARNING: >>> Possible starvation in striped pool.
Thread name: sys-stripe-6-#7%ignite%
Queue: []
Deadlock: false
Completed: 77168
Thread [name="sys-stripe-6-#7%ignite%", id=134, state=WAITING, blockCnt=0, waitCnt=68842]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
at o.a.i.i.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:176)
at o.a.i.i.util.future.GridFutureAdapter.get(GridFutureAdapter.java:139)
at o.a.i.i.processors.continuous.GridContinuousProcessor.addNotification(GridContinuousProcessor.java:935)
at o.a.i.i.processors.cache.query.continuous.CacheContinuousQueryHandler.onEntryUpdate(CacheContinuousQueryHandler.java:850)
at o.a.i.i.processors.cache.query.continuous.CacheContinuousQueryHandler.access$700(CacheContinuousQueryHandler.java:82)
at o.a.i.i.processors.cache.query.continuous.CacheContinuousQueryHandler$1.onEntryUpdated(CacheContinuousQueryHandler.java:413)
at o.a.i.i.processors.cache.query.continuous.CacheContinuousQueryManager.onEntryExpired(CacheContinuousQueryManager.java:429)
at o.a.i.i.processors.cache.GridCacheMapEntry.onExpired(GridCacheMapEntry.java:3046)
at o.a.i.i.processors.cache.GridCacheMapEntry.onTtlExpired(GridCacheMapEntry.java:2961)
at o.a.i.i.processors.cache.GridCacheTtlManager$1.applyx(GridCacheTtlManager.java:61)
at o.a.i.i.processors.cache.GridCacheTtlManager$1.applyx(GridCacheTtlManager.java:52)
at o.a.i.i.util.lang.IgniteInClosure2X.apply(IgniteInClosure2X.java:38)
at o.a.i.i.processors.cache.IgniteCacheOffheapManagerImpl.expire(IgniteCacheOffheapManagerImpl.java:1007)
at o.a.i.i.processors.cache.GridCacheTtlManager.expire(GridCacheTtlManager.java:198)
at o.a.i.i.processors.cache.GridCacheTtlManager.expire(GridCacheTtlManager.java:160)
at o.a.i.i.processors.cache.GridCacheUtils.unwindEvicts(GridCacheUtils.java:854)
at o.a.i.i.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1073)
at o.a.i.i.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:561)
at o.a.i.i.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:378)
at o.a.i.i.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:304)
at o.a.i.i.processors.cache.GridCacheIoManager.access$100(GridCacheIoManager.java:99)
at o.a.i.i.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:293)
at o.a.i.i.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1556)
at o.a.i.i.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1184)
at o.a.i.i.managers.communication.GridIoManager.access$4200(GridIoManager.java:126)
at o.a.i.i.managers.communication.GridIoManager$9.run(GridIoManager.java:1097)
at o.a.i.i.util.StripedExecutor$Stripe.run(StripedExecutor.java:483)
at java.lang.Thread.run(Thread.java:745)
The striped pool is responsible for message processing. This warning tells you that no progress is happening on some of the stripes. It may happen due to a bad network connection or when you put very large objects into a cache or a queue.
You may find more information about it in these threads:
http://apache-ignite-users.70518.x6.nabble.com/Possible-starvation-in-striped-pool-td14892.html
http://apache-ignite-users.70518.x6.nabble.com/Possible-starvation-in-striped-pool-message-td15993.html
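If the stripes are merely overloaded rather than blocked waiting on a future (as they are in the trace above), one knob that is sometimes tuned is the striped pool size, which defaults to the CPU count. A hedged sketch with an illustrative value; it is not a fix for blocked stripes:

// Sketch: increase the striped pool size. The value is illustrative; profile first.
import org.apache.ignite.configuration.IgniteConfiguration;

public class StripedPoolConfig {
    public static IgniteConfiguration configure() {
        return new IgniteConfiguration()
            .setStripedPoolSize(16); // default is the number of available CPU cores
    }
}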
We use Aerospike in our projects and have run into a strange problem.
We have a 3-node cluster, and after some node restarts it stops working.
So we built a test to demonstrate the problem.
We set up a test cluster: 3 nodes, replication factor = 2.
Here is our namespace config:
namespace test {
    replication-factor 2
    memory-size 100M
    high-water-memory-pct 90
    high-water-disk-pct 90
    stop-writes-pct 95
    single-bin true
    default-ttl 0
    storage-engine device {
        cold-start-empty true
        file /tmp/test.dat
        write-block-size 1M
    }
}
We wrote 100 MB of test data, after which the situation was:
available pct was about 66% and disk usage about 34%.
All good so far.
Then we stopped one node. After migration, available pct = 49% and disk usage = 50%.
We returned the node to the cluster, and after migration disk usage went back to roughly its previous value (about 32%), but available pct on the old nodes stayed at 49%.
We stopped a node one more time:
available pct = 31%
After repeating once more we got:
available pct = 0%
Our cluster crashed, and clients got AerospikeException: Error Code 8: Server memory error.
So how can we reclaim available pct?
If your defrag-q is empty (and you can see whether it is by grepping the logs), then the issue is likely that your namespace is smaller than your post-write-queue. Blocks on the post-write-queue are not eligible for defragmentation, so you would see avail-pct trending down with no defragmentation to reclaim the space. By default the post-write-queue is 256 blocks, which in your case (with a 1 MB write-block-size) equates to 256 MB. If your namespace is smaller than that, you will see avail-pct continue to drop until you hit stop-writes. You can reduce the size of the post-write-queue dynamically (i.e. no restart needed) using the following command; here I suggest 8 blocks:
asinfo -v 'set-config:context=namespace;id=<NAMESPACE>;post-write-queue=8'
If you are happy with this value you should amend your aerospike.conf to include it so that it persists after a node restart.
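For example, a sketch of the corresponding aerospike.conf change, based on the namespace from the question (only the storage-engine block is shown):

storage-engine device {
    cold-start-empty true
    file /tmp/test.dat
    write-block-size 1M
    post-write-queue 8    # persists the value applied dynamically above
}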