I am using off-heap memory with persistence disabled. My data region max size is 20 MB, and I am using RANDOM_2_LRU as the page eviction mode. When eviction happens I want to listen for EVT_CACHE_ENTRY_EVICTED. I have written the following code, but it is not working; the Ignite logs do say that page-based eviction started.
CacheEntryEvictionListener listener = new CacheEntryEvictionListener(logger);
// Process local events
ignite.events(ignite.cluster().forCacheNodes(cacheName)).localListen(listener, EventType.EVT_CACHE_ENTRY_EVICTED);
IgniteBiPredicate<UUID, CacheEvent> biPredicate = IgniteUtil.getCacheEventBiPredicate(cacheName);
IgnitePredicate<CacheEvent> predicate = IgniteUtil.getCacheEventPredicate(cacheName);
// Process remote events
ignite.events(ignite.cluster().forCacheNodes(cacheName)).remoteListen(biPredicate, predicate, EventType.EVT_CACHE_ENTRY_EVICTED);
EVT_CACHE_ENTRY_EVICTED is triggered when there's an on-heap eviction. This might happen if you have a near-cache or an on-heap cache.
Unfortunately, there is no event for off-heap eviction. (A possible reason for this is that eviction is performed at a page rather than a record level.)
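Note also that even for on-heap evictions, Ignite cache events are disabled by default and must be enabled explicitly on every node, otherwise no listener fires at all. A minimal Spring XML sketch of that setting (this assumes the standard `util` namespace is declared; the rest of the configuration is omitted):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Cache events are disabled by default; enable the eviction event explicitly. -->
    <property name="includeEventTypes">
        <list>
            <util:constant static-field="org.apache.ignite.events.EventType.EVT_CACHE_ENTRY_EVICTED"/>
        </list>
    </property>
</bean>
```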
Related
I am unable to understand how the two attributes 'memoryLimit' and 'maxPageSize' differ.
As per documentation:
'maxPageSize' - the 'maximum number of persistent messages to page from store at a time'
'memoryLimit' - the amount of memory that's assigned to the in-memory store for the queue
Here is a sample configuration for a queue :
<policyEntry queue="Consumer.normal.queue" producerFlowControl="true" memoryLimit="3200" maxPageSize="4"
             maxBrowsePageSize="1000" prioritizedMessages="true" useCache="false" expireMessagesPeriod="0" queuePrefetch="1"/>
What I have observed is that if maxPageSize="1" and memoryLimit="3200", then I can see 2 messages loaded into memory and browsable via a JMS client (the rest of the messages stay in KahaDB).
However, if maxPageSize="4" and memoryLimit="3200", then I can see 4 messages loaded into memory and browsable via a JMS client.
So are the two values meant to serve the same purpose ?
AND
does it mean that whichever of these two attributes allows the greater number of messages will be used by ActiveMQ?
maxPageSize determines how many messages ActiveMQ loads from the store (in your case, KahaDB) to hand to consumers. The memoryLimit indicates how much memory to allocate to keep messages in memory.
In short, (message size x maxPageSize <= memoryLimit) so that you do not hit producer flow control.
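That rule of thumb can be sketched as a tiny self-contained check (the class, method, and the 800/1600-byte message sizes are illustrative, not ActiveMQ API; the 3200-byte limit and page size of 4 mirror the question's config):

```java
public class PageSizeCheck {
    // Rule of thumb from above: avgMessageSize * maxPageSize <= memoryLimit,
    // otherwise the paged-in batch exceeds the queue's memory limit and
    // producer flow control kicks in.
    static boolean fitsInMemory(long avgMessageSize, int maxPageSize, long memoryLimit) {
        return avgMessageSize * maxPageSize <= memoryLimit;
    }

    public static void main(String[] args) {
        long memoryLimit = 3200; // bytes, as in the policyEntry above
        System.out.println(fitsInMemory(800, 4, memoryLimit));  // 4 pages of ~800 B fit
        System.out.println(fitsInMemory(1600, 4, memoryLimit)); // 6400 B would exceed the limit
    }
}
```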
You want your page size to be much higher than 1 or 2 for ActiveMQ to perform well (start with 200 to 1000). Numbers that low will have higher latency.
Note: Priority is an anti-pattern in distributed messaging at significant load (over 1M messages per day). It works well in a local embedded broker within your Java VM process. ActiveMQ disables it by default.
To enable priority support, add the attribute prioritizedMessages="true" to the <policyEntry queue="..."> element inside <destinationPolicy>.
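For example, a minimal broker-config sketch enabling priority for all queues (the ">" wildcard matches every queue; adapt the queue pattern to your setup):

```xml
<destinationPolicy>
    <policyMap>
        <policyEntries>
            <!-- prioritizedMessages is off by default; enable it per queue. -->
            <policyEntry queue=">" prioritizedMessages="true"/>
        </policyEntries>
    </policyMap>
</destinationPolicy>
```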
My application just froze because the memory usage of RabbitMQ exceeded its threshold.
I am using pika and pyrabbit as a python wrappers for handling channels and connections.
I wonder if there is a way for my process to register for something and get a notification when that event occurs (and ideally even a bit before it does).
When using rabbitpy you can check whether the blocked flag is set. This flag means that the connection is being blocked due to resource constraints (most likely low memory).
import time
import rabbitpy

with rabbitpy.Connection('amqp://guest:guest@localhost:5672/%2f') as conn:
    print(conn.blocked)
    # e.g. wait until the broker unblocks the connection:
    while conn.blocked:
        time.sleep(0.1)
Can we apply a custom off-heap eviction policy based on cache entry attributes? (For example, suppose we store Employee POJOs in the cache with a status attribute whose value is true/false; is it possible to evict records from the cache based on the status attribute?)
As per the Apache Ignite documentation, we can only customize the on-heap eviction policy (via the EvictionPolicy interface). Is it possible to customize the PageEvictionMode as well?
// Enabling RANDOM_2_LRU eviction for this region.
regionCfg.setPageEvictionMode(DataPageEvictionMode.RANDOM_2_LRU);
The page eviction algorithm is much more complicated than the one for on-heap entries, and unfortunately, as a consequence, it is less configurable.
DataPageEvictionMode is an enum with only three possible values: DISABLED, RANDOM_LRU and RANDOM_2_LRU.
You can find their descriptions by the following link:
https://apacheignite.readme.io/docs/evictions
Page eviction based on entries' attributes is impossible, since entries are distributed among pages in a nearly random order. You can't tell the page memory to remove one particular entry; only a whole page can be evicted.
I'm fairly new to Gigaspaces. I am using a polling container to fetch events from a space and then dispatch these over a HTTPS connection. If the server endpoint for the connection becomes unavailable, I need to update the state of the event objects to 'blocked' and re-queue them in the space for later retries (for which I have a separate polling container that specifically looks for the blocked events).
What I'm struggling with is finding a good way to ensure the blocked event polling container does not over-rotate on the blocked events (that is, read the events, discover that the endpoint is still blocked, write them back to the space and then immediately re-read them).
Is there a way I could build in a delay before re-reading the events from the space? Options might include:
Setting/updating a timestamp on the object before writing it back, and then comparing this with the current time within the polling process (for this, I expect I would have to use an SQLQuery that includes SYSDATE as the EventTemplate; but would I have to query SYSDATE out of the space every time I want to update the object, rather than using System.currentTimeMillis() or equivalent, in order to ensure I am comparing apples to apples?)
Applying a configuration setting of some kind on the blocked event polling container or listener that causes it to only poll periodically.
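The first option can be sketched without any GigaSpaces API: stamp each blocked event with a retry-not-before time when writing it back, and have the polling template ignore events whose time has not yet passed. As long as one clock source is used consistently for both stamping and comparing, there is no need to query SYSDATE from the space (assuming reasonably synchronized clocks across nodes). The class and field names below are illustrative:

```java
// Illustrative sketch: a blocked event carrying a retry-not-before timestamp.
public class BlockedEvent {
    private long retryNotBefore; // epoch millis; 0 = eligible immediately

    // Called before writing the event back to the space after a failed dispatch.
    public void blockFor(long delayMillis, long nowMillis) {
        this.retryNotBefore = nowMillis + delayMillis;
    }

    // The polling template (e.g. an SQLQuery like "retryNotBefore <= ?")
    // would express this same condition on the space side.
    public boolean isEligible(long nowMillis) {
        return nowMillis >= retryNotBefore;
    }

    public static void main(String[] args) {
        BlockedEvent e = new BlockedEvent();
        long now = 1_000_000L;
        e.blockFor(5_000, now);
        System.out.println(e.isEligible(now));         // false: still in back-off
        System.out.println(e.isEligible(now + 5_000)); // true: delay elapsed
    }
}
```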
You can use either approach:
docs.gigaspaces.com/xap97/polling-container.html#dynamic-template-definition
docs.gigaspaces.com/sbp/dynamic-polling-container-templates-using-triggeroperationhandler.html
In the future, for GigaSpaces related questions, please use:
ask.gigaspaces.org/questions/
Thanks,
Ester.
I am using Apache ServiceMix and Apache ActiveMQ in my product. In the case of the HttpConnector I found a performance issue.
If I increase the number of concurrent requests, the exchange queue gets stuck on one of the components. After all the requests start processing, most of the exchanges get stuck at the end component or at the dispatcher* component. The container's heap usage grows very high, and after some time it crashes and restarts automatically.
I have also used a transaction manager, and the flow name is set to jms.
I need a solution urgently.
You might try using KahaDB instead of the amqPersistenceAdapter. We saw a huge throughput increase just by switching to it.
Here is the config we used (it is highly dependent on your application, but make sure enableJournalDiskSyncs is set to false):
<persistenceAdapter>
<kahaDB directory="../data/kaha"
enableJournalDiskSyncs="false"
indexWriteBatchSize="10000"
indexCacheSize="1000" />
</persistenceAdapter>
If I increase the number of concurrent requests, the exchange queue gets stuck on one of the components.
ActiveMQ has a mechanism to stop producers from writing messages, called "producer flow control". It seems that your producer is faster than your consumer (or the consumer's speed is not stable), so first check the memoryLimit config for your AMQ broker (it can also be defined per queue). Try to increase it.
<destinationPolicy>
<policyMap>
<policyEntries>
<policyEntry topic="FOO.>" producerFlowControl="false" memoryLimit="1mb">
<dispatchPolicy>
<strictOrderDispatchPolicy/>
</dispatchPolicy>
<subscriptionRecoveryPolicy>
<lastImageSubscriptionRecoveryPolicy/>
</subscriptionRecoveryPolicy>
</policyEntry>
</policyEntries>
</policyMap>
Also, you can disable the throttling of incoming messages with the producerFlowControl="false" option. In that case, when the whole buffer is used up, AMQ will slow down and messages will be spooled to disk. See Producer Flow Control and Message Cursors for more info.
The container's heap usage grows very high, and after some time it crashes and restarts automatically.
But anyway, this is just a way of tuning your app, not a solution, because there will always be a case where some resource runs out. :)
You should limit the incoming requests or balance them, e.g. using Pound.