On-Heap Caching – what does MaxSize in Ignite really mean?

I am a bit confused about the parameter "MaxSize" used to configure on-heap caching in Ignite. When we talk about the heap, we usually think of size in terms of memory, but I am not sure that's the case here. Could anyone please clarify what maxSize set to 1000000 really means? The comment above describes it as 1 million, which reads like a number of items to me, but that doesn't make much sense to me for a heap.
Here is the link from the Ignite docs: https://ignite.apache.org/docs/latest/configuring-caches/on-heap-caching#configuring-eviction-policy
<bean class="org.apache.ignite.cache.CacheConfiguration">
    <property name="name" value="myCache"/>
    <!-- Enabling on-heap caching for this distributed cache. -->
    <property name="onheapCacheEnabled" value="true"/>
    <property name="evictionPolicy">
        <!-- LRU eviction policy. -->
        <bean class="org.apache.ignite.cache.eviction.lru.LruEvictionPolicy">
            <!-- Set the maximum cache size to 1 million (default is 100,000). -->
            <property name="maxSize" value="1000000"/>
        </bean>
    </property>
</bean>

Yes, maxSize is the number of entries in the cache, not a size in bytes.
This makes sense, since for the additional on-heap layer you generally want to keep only the most frequently used entries.
If you want to limit the cache size in bytes, as you described, use setMaxMemorySize instead.
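For reference, here is a minimal Java sketch of the same configuration done programmatically, showing both options: maxSize caps the number of on-heap entries, while setMaxMemorySize would cap the on-heap cache in bytes (the 512 MB figure below is only an illustrative value):
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.eviction.lru.LruEvictionPolicy;
import org.apache.ignite.configuration.CacheConfiguration;

public class OnHeapCacheConfig {
    public static void main(String[] args) {
        // LRU policy capped by entry count: at most 1,000,000 on-heap entries.
        LruEvictionPolicy<Object, Object> lru = new LruEvictionPolicy<>();
        lru.setMaxSize(1_000_000);

        // Alternatively (or additionally), cap the on-heap cache by memory:
        // lru.setMaxMemorySize(512L * 1024 * 1024);   // 512 MB, illustrative value

        CacheConfiguration<Object, Object> cacheCfg = new CacheConfiguration<>("myCache");
        cacheCfg.setOnheapCacheEnabled(true);
        cacheCfg.setEvictionPolicy(lru);

        try (Ignite ignite = Ignition.start()) {
            ignite.getOrCreateCache(cacheCfg);
        }
    }
}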

Related

BinaryObject and Cache Eviction/Expiration

When using BinaryObject for off-heap in-memory-only cache values, do we need to do anything to protect against the cache entry being evicted or expired while accessing fields via BinaryObject::field(String)?
For example, suppose the cache's data region has the default memory-size eviction (90% full?), or the cache uses a creation expiry policy, and the region happens to evict entries, or the cache to expire them, while the code is making several calls to BinaryObject::field(String). Does Ignite automatically ensure that the BinaryObject won't access invalid off-heap memory (perhaps by throwing an exception), or can the developer use locking / transactions or a "touched" expiry to help prevent this?
Thanks!
BinaryObject instances returned from the Ignite API and accessed by the user code are copies. They do not reference Ignite storage memory directly.
You can work with BinaryObject even after the corresponding cache entry gets evicted.
When a cache object becomes eligible for eviction, it will be removed from memory.
Ignite has several eviction algorithms:
Random-LRU
Random-2-LRU
These are explained in the Ignite documentation.
This means that if you recently used a cache value (or are using it at the moment the eviction takes place), whether in its BinaryObject representation or not, it will not be evicted. All of these eviction algorithms are based on least-recently-used (LRU) ordering.
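To illustrate those copy semantics, here is a minimal sketch of reading fields through the binary API; the cache name and field names are hypothetical placeholders:
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.binary.BinaryObject;

public class BinaryFieldRead {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // "persons" and its fields are hypothetical placeholders.
            IgniteCache<Integer, BinaryObject> cache =
                ignite.<Integer, Object>getOrCreateCache("persons").withKeepBinary();

            BinaryObject person = cache.get(1);
            if (person != null) {
                // 'person' is a copy on the local JVM heap: per the answer above,
                // even if the entry is evicted or expires between these two calls,
                // the field reads remain valid.
                String name = person.field("name");
                Integer age = person.field("age");
                System.out.println(name + " is " + age);
            }
        }
    }
}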

Multiple Hazelcast instances, each in its own JVM, on a single host, communicating with other hosts of a similar setup

We are currently using a cluster of 4 nodes, each with 244 GB of memory. However, during garbage collection (GC) peaks the response time increases to 7 s, while our clients operate with a response time of 5 s. We are hoping to reduce the GC pauses by using smaller heaps. The questions are:
How do we run multiple Hazelcast instances on each of the nodes, with each instance running in its own JVM (and hence using a smaller heap)?
How do we enable the instances on one host to discover the instances on the other hosts?
Just launch the instances as usual - I'm not sure I understand the problem.
Please read the clustering documentation. It can be as simple as just launching them, and they will automatically find each other via multicast. Or you can explicitly configure IPs and ports:
<hazelcast>
    <network>
        <join>
            <multicast enabled="false"/>
            <tcp-ip enabled="true">
                <member>machine1</member>
                <member>machine2</member>
                <member>machine3:5799</member>
                <member>192.168.1.0-7</member>
                <member>192.168.1.21</member>
            </tcp-ip>
        </join>
    </network>
</hazelcast>
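If you prefer programmatic configuration over the XML above, a minimal Java sketch might look like this (the member addresses are the same placeholders used in the XML). Running this main class several times per host, each JVM with its own smaller -Xmx, gives you multiple members per host, and the TCP/IP join lets them discover the members on the other hosts:
import com.hazelcast.config.Config;
import com.hazelcast.config.JoinConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class SmallHeapMember {
    public static void main(String[] args) {
        Config config = new Config();

        // Disable multicast and list the cluster members explicitly over TCP/IP.
        JoinConfig join = config.getNetworkConfig().getJoin();
        join.getMulticastConfig().setEnabled(false);
        join.getTcpIpConfig()
            .setEnabled(true)
            .addMember("machine1")
            .addMember("machine2")
            .addMember("machine3:5799");

        // Start one member in this JVM; each launch of this class is one member.
        HazelcastInstance instance = Hazelcast.newHazelcastInstance(config);
        System.out.println("Cluster size: " + instance.getCluster().getMembers().size());
    }
}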

In a WAS Liberty connection pool, can I validate connections on borrow?

We are currently migrating an application to run on a Liberty server (8.5.5.9). We have found that connections between the app server and the database are occasionally terminated by the firewall for being idle for an extended period of time. When this happens, the application receives one of these broken connections on the next HTTP request.
Previously, we had been using Apache Commons DBCP to manage the connection pool. One of the configuration parameters of a DBCP connection pool is "testOnBorrow", which prevents the application from being handed one of these bad connections.
Is there such a configuration parameter in a Liberty-managed datasource?
So far, we have configured our datasource like this:
<dataSource jndiName="jdbc/ora" type="javax.sql.DataSource">
    <properties.oracle
        user="example" password="{xor}AbCdEfGh123="
        URL="jdbc:oracle:thin:@example.com:1521:mydb"
    />
    <connectionManager
        minPoolSize="3" maxPoolSize="10" maxIdleTime="10m"
        purgePolicy="ValidateAllConnections"
    />
    <jdbcDriver id="oracle-driver" libraryRef="oracle-libs"/>
</dataSource>
The purgePolicy is currently set to validate all connections once one bad one is found (e.g., overnight, when all connections have been idle for a long time). But all this does is prevent multiple bad connections from being handed to the application one after another.
One option in the connectionManager would be to set agedTimeout="20m" to automatically remove connections that are old enough to have already been terminated by the firewall. However, this would also remove connections that have been used recently (which is exactly what keeps the firewall from breaking them).
Am I missing something obvious here?
Thanks!
In this scenario I would recommend keeping maxIdleTime, which you are already using, but reducing your minPoolSize to 0 (or removing it, since the default value is 0).
Per the maxIdleTime doc:
maxIdleTime: Amount of time after which an unused or idle connection can be discarded during pool maintenance, if doing so does not reduce the pool below the minimum size.
Since you have minPoolSize=3, pool maintenance won't kick in if, for example, there are only 3 bad connections in the pool, because the maintenance thread won't take the pool size below the minimum, per the doc. Setting minPoolSize=0 should therefore allow maxIdleTime to clean up all of the bad connections, as you would expect in this scenario.
So here is the final configuration that I would suggest for you:
<dataSource jndiName="jdbc/ora" type="javax.sql.DataSource">
    <properties.oracle user="example" password="{xor}AbCdEfGh123="
        URL="jdbc:oracle:thin:@example.com:1521:mydb"/>
    <connectionManager maxPoolSize="10" maxIdleTime="18m"/>
    <jdbcDriver id="oracle-driver" libraryRef="oracle-libs"/>
</dataSource>
The value of maxIdleTime assumes that your firewall kills connections after 20 minutes; triggering cleanup after 18 minutes gives the cleanup thread a 2-minute window to remove the soon-to-be-bad connections.
It's an old question, but this may be useful to someone else:
You can use the "validationTimeout" property of the dataSource. According to the documentation, "when specified, pooled connections are validated before being reused from the connection pool."
This will not close connections as soon as they are cut by the firewall, but it will prevent the application from crashing because of a stale connection.
You can then combine this with purgePolicy="ValidateAllConnections" to revalidate all connections as soon as one is detected as stale.
Reference : https://openliberty.io/docs/21.0.0.1/reference/config/dataSource.html#dataSource
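For comparison with DBCP's testOnBorrow, the sketch below shows a hypothetical manual check in application code using the standard JDBC Connection.isValid call. This is not a Liberty-specific API, and it should not be necessary once validationTimeout is configured; it is only meant to illustrate what validating a connection on borrow amounts to:
import java.sql.Connection;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class CheckedBorrow {
    // Hypothetical manual check, roughly what DBCP's testOnBorrow does:
    // verify the pooled connection before handing it to application code.
    public static Connection getCheckedConnection() throws Exception {
        DataSource ds = (DataSource) new InitialContext().lookup("jdbc/ora");
        Connection conn = ds.getConnection();
        if (!conn.isValid(5)) {          // 5-second validation timeout
            conn.close();                // discard the stale connection...
            conn = ds.getConnection();   // ...and fetch another from the pool
        }
        return conn;
    }
}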

Particular ServiceControl not purging messages

We're having an issue where our NServiceBus ServiceControl instance is not purging messages from its RavenDB according to the configured expiration policy.
We have set the following key in the ServiceControl.exe.config file, which should expire messages after 1 hour, but I can still see messages from yesterday in ServiceInsight, and the RavenDB has grown considerably in size.
<add key="ServiceControl/HoursToKeepMessagesBeforeExpiring" value="1" />
We need to get the automatic purging of messages working before our system goes into production, so any assistance is appreciated.
As discussed offline with @starskythehutch, the issue is that 1 is not a supported value for HoursToKeepMessagesBeforeExpiring; the minimum value is 24. Setting it to 1 causes ServiceControl to revert to its default value of 720, which means nothing is purged for a long time.
We are currently improving the way ServiceControl enforces this behavior, to let users better understand what is going on.

How to set the timeout on a NHibernate transaction

I need a lot of DB processing done in a single transaction, including some processing using NHibernate.
To make everything work in the same transaction, I'm using NHibernate's session to start it and enlisting the commands for the other work in it.
Everything goes OK until I commit. At that time I get a transaction timeout.
How can I set the NHibernate transaction timeout value sufficiently high?
We use FluentNHibernate.
After some trial and lots of error I found this:
It appears that NHibernate uses the TransactionManager.DefaultTimeout value.
It can be set via
<system.transactions>
    <defaultSettings timeout="01:00:00" />
</system.transactions>
in the app/web config.
If it is set to a value higher than TransactionManager.MaximumTimeout, the effective timeout is capped at that maximum.
If you need longer, you can raise that limit by updating the machine.config for your .NET Framework version with:
<system.transactions>
    <machineSettings maxTimeout="01:00:00" />
</system.transactions>