Apache Ignite Eviction Policy Max Size in Visor shown as <n/a>

My question is really simple.
I am using the example XML config from https://apacheignite.readme.io/docs/evictions#section-first-in-first-out-fifo- to set up the following FIFO eviction policy.
<bean class="org.apache.ignite.cache.CacheConfiguration">
<property name="name" value="myCache"/>
<!-- Enabling on-heap caching for this distributed cache. -->
<property name="onheapCacheEnabled" value="true"/>
<property name="evictionPolicy">
<!-- FIFO eviction policy. -->
<bean class="org.apache.ignite.cache.eviction.fifo.FifoEvictionPolicy">
<!-- Set the maximum cache size to 1 million (default is 100,000). -->
<property name="maxSize" value="1000000"/>
</bean>
</property>
...
</bean>
How can I verify that my eviction policy has been set up successfully before I start seeding data into my cache? I have been using Visor's config command: Eviction Policy Enabled is set to on, and Eviction Policy is set to o.a.i.cache.eviction.fifo.FifoEvictionPolicy, but Eviction Policy Max Size shows <n/a> even though it is configured in the XML. That leads me to think the eviction policy max size is not set up correctly. Can someone shed some light on this?

Apache Ignite registers management beans for its caches, so you can verify your cache configuration through JMX. Just run jconsole from the JDK and inspect the cache configuration MBean, which exposes the eviction policy settings.
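You can also read the configuration back in code. A minimal sketch, assuming the cache name "myCache" from the XML above and a config file named ignite-config.xml (adjust to your setup):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.eviction.fifo.FifoEvictionPolicy;
import org.apache.ignite.configuration.CacheConfiguration;

// Start (or connect to) a node with your XML config.
Ignite ignite = Ignition.start("ignite-config.xml");

// Read the cache configuration back from the running node.
CacheConfiguration<?, ?> ccfg = ignite.cache("myCache").getConfiguration(CacheConfiguration.class);

// If the XML was applied, this prints the maxSize=1000000 from the FifoEvictionPolicy.
FifoEvictionPolicy<?, ?> plc = (FifoEvictionPolicy<?, ?>) ccfg.getEvictionPolicy();
System.out.println("Eviction policy max size: " + plc.getMaxSize());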
Thanks, Alexey

Related

Apache Ignite: Off heap Eviction Vs Cache Eviction policy

I am currently running into out-of-memory errors with my data region, and I am trying to understand storage and eviction policies for the Ignite cache:
Out of memory in data region [name=default, initSize=256.0 MiB, maxSize=8.0 GiB, persistenceEnabled=false]
As I understand it, a non-persistent data region is stored in off-heap memory, so the -Xmx parameter effectively has no impact on its size. So for my data region storage, my system should have sufficient memory.
However, the caches stored in the data region have eviction policies of their own. So I am not clear: when will the data region eviction policy trigger?
Will the data region eviction policy trigger if the cache eviction policy is not able to free sufficient space?
The cache-level eviction policies apply only to the on-heap cache.
If you want to evict from off-heap storage, you need to configure page eviction at the data region level; it triggers when the region's off-heap memory fills up to its eviction threshold, regardless of any cache-level policy. For example:
<property name="dataRegionConfigurations">
<list>
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
<property name="name" value="40MB_Region_Eviction"/>
<property name="initialSize" value="#{20 * 1024 * 1024}"/>
<property name="maxSize" value="#{40 * 1024 * 1024}"/>
<property name="pageEvictionMode" value="RANDOM_2_LRU"/>
</bean>
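The same region can also be configured programmatically; here is a minimal sketch using the Ignite 2.x Java API (values copied from the XML above):

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataPageEvictionMode;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

// 40 MB off-heap region with page eviction enabled.
DataRegionConfiguration region = new DataRegionConfiguration();
region.setName("40MB_Region_Eviction");
region.setInitialSize(20L * 1024 * 1024);
region.setMaxSize(40L * 1024 * 1024);
region.setPageEvictionMode(DataPageEvictionMode.RANDOM_2_LRU);

DataStorageConfiguration storage = new DataStorageConfiguration();
storage.setDataRegionConfigurations(region);

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setDataStorageConfiguration(storage);
Ignition.start(cfg);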

How to assign fixed memory in Apache Ignite for a single Ignite instance (on-heap, off-heap)

We have an Ignite cluster of 3 instances, so how can we assign fixed memory to every Ignite instance?
(OS: Ubuntu 14.04
Ignite version: 2.4)
If you want to set the heap memory size, use JVM options such as:
-Xms512m -Xmx512m
When starting a node with bin/ignite.sh, these can be passed via the JVM_OPTS environment variable.
Off-Heap memory allows your cache to overcome lengthy JVM Garbage Collection (GC) pauses when working with large heap sizes by caching data outside of main Java Heap space, but still in RAM.
By default, Ignite nodes consume up to 20% of the RAM available locally. You can change this value as follows:
<!-- Redefining maximum memory size for the cluster node usage. -->
<property name="dataStorageConfiguration">
    <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
        <!-- Redefining the default region's settings. -->
        <property name="defaultDataRegionConfiguration">
            <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                <property name="name" value="Default_Region"/>
                <!-- Setting the size of the default region to 4GB. -->
                <property name="maxSize" value="#{4L * 1024 * 1024 * 1024}"/>
            </bean>
        </property>
    </bean>
</property>
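A minimal Java sketch of the same default-region cap, in case you configure the node programmatically (Ignite 2.x API):

import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

DataStorageConfiguration storage = new DataStorageConfiguration();

// Cap the default off-heap region at 4 GB.
storage.getDefaultDataRegionConfiguration().setName("Default_Region");
storage.getDefaultDataRegionConfiguration().setMaxSize(4L * 1024 * 1024 * 1024);

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setDataStorageConfiguration(storage);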
On-heap caching lets Ignite additionally keep cache entries in the Java heap. You can enable it by setting org.apache.ignite.configuration.CacheConfiguration.setOnheapCacheEnabled(...) to true.
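For example, a sketch of a cache with on-heap caching enabled and an eviction policy bounding the heap copy (the cache name and limit are illustrative):

import org.apache.ignite.cache.eviction.fifo.FifoEvictionPolicy;
import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("myCache");
ccfg.setOnheapCacheEnabled(true);

// Without a policy the on-heap copy can grow without bound; cap it at 1M entries.
ccfg.setEvictionPolicy(new FifoEvictionPolicy<>(1_000_000));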
You can read more here https://apacheignite.readme.io/docs/memory-configuration
Because the heap size isn't unlimited, you can use eviction policies:
https://apacheignite.readme.io/docs/evictions

Apache Ignite - Node running on remote machine not discovered

Apache Ignite version: 2.1.0
I am using TcpDiscoveryVmIpFinder to configure the nodes of an Apache Ignite cluster to set up a compute grid. Below is my configuration, which is just the example-default.xml edited for the IP addresses:
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<!--
Ignite provides several options for automatic discovery that can be used
instead os static IP based discovery. For information on all options refer
to our documentation: http://apacheignite.readme.io/docs/cluster-config
-->
<!-- Uncomment static IP finder to enable static-based discovery of initial nodes. -->
<!--<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">-->
<!-- <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder"> -->
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
<property name="addresses">
<list>
<!-- In distributed environment, replace with actual host IP address. -->
<value>xxx.40.16.yyy:47500..47509</value>
<value>xx.40.16.zzz:47500..47509</value>
</list>
</property>
</bean>
</property>
</bean>
</property>
If I start multiple nodes on an individual machine, the nodes on that machine discover each other and form a cluster. But the nodes on the remote machines do not discover each other.
Any advice would be helpful...
First of all, make sure that you are really using this config file and not the default config. With the default configuration, nodes can find each other only on the same machine.
Once you've checked that, you also need to test that it's possible to connect from host 106.40.16.64 to 106.40.16.121 (and vice versa) via ports 47500..47509. It's possible that a firewall is blocking connections or that these ports are simply closed.
For example, it's possible to check it with netcat, run this from 106.40.16.64 host:
nc -z 106.40.16.121 47500
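If netcat isn't available, the same reachability check can be sketched in Java (host, port, and the 3-second timeout are taken from, or added to, the example above):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

try (Socket s = new Socket()) {
    // Try to open a TCP connection to the first discovery port.
    s.connect(new InetSocketAddress("106.40.16.121", 47500), 3000);
    System.out.println("Port is reachable.");
} catch (IOException e) {
    System.out.println("Cannot connect: " + e.getMessage());
}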

Ignite 2.0: how to swap to hard disk

While doing some tests with Ignite memory, a few problems came up.
The documentation says I can enable swapping to hard disk in CacheConfiguration and set the swap file path in MemoryPolicyConfiguration.
However, the swap-enabled flag is missing in Ignite 2.0 while setSwapFilePath still exists. So I wonder whether swapping to disk is still available in Ignite 2.0, and if so, how I can configure it.
Define your memory policy, then reference it from your cache, like this:
<!-- Defining a custom memory policy. -->
<property name="memoryPolicies">
    <list>
        <bean class="org.apache.ignite.configuration.MemoryPolicyConfiguration">
            <property name="name" value="Default_Region"/>
            <!-- 100 MB memory region with disabled eviction. -->
            <property name="initialSize" value="#{100 * 1024 * 1024}"/>
            <!-- Setting the name of the swapping file. -->
            <property name="swapFilePath" value="mindMemoryPolicySwap"/>
        </bean>
    </list>
</property>
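Note that the XML above only defines the policy; the cache must reference it by name to use it. A minimal Java sketch for the Ignite 2.0 API (setMemoryPolicyName was renamed to setDataRegionName in later versions):

import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("myCache");

// Point the cache at the memory policy defined in the XML.
ccfg.setMemoryPolicyName("Default_Region");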
Try it.

[Apache Ignite] Cache data lost when I create the primary and backup cache

I run an example with two Ignite cache nodes in two JVMs. Each JVM runs one Ignite node, and the nodes map to the same cache.
ignite-config.xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
...
<property name="cacheConfiguration">
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<!-- Set a cache name. -->
<property name="name" value="cacheName"/>
<!-- Set cache mode. -->
<property name="cacheMode" value="PARTITIONED"/>
<!-- Number of backup nodes. -->
<property name="backups" value="1"/>
...
</bean>
</property>
</bean>
Test steps:
1. One of the Ignite nodes starts first and writes 10 pieces of data (key-value: 1-1, 2-2, 3-3 ... 10-10).
2. Then the second node starts and maps to the same cache.
3. The nodes then start rebalancing the data between them: the first node holds 4 pieces, the second holds 6.
4. Then I kill the JVM of the first cache node.
Result: the backup node doesn't own 10 pieces as I expect. Why?
I'm not sure why ignitevisorcmd.sh is reporting that the keys are lost. I suggest looking directly into the cache by querying it after you kill the node. Or, as Valentin suggests, you can try IgniteCache.size().
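For example, a minimal check (cache name and config file taken from the question; run it from a new node after killing the first JVM):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CachePeekMode;

Ignite ignite = Ignition.start("ignite-config.xml");
IgniteCache<Integer, Integer> cache = ignite.cache("cacheName");

// With backups=1, the surviving node should still hold all 10 entries
// once its backup copies are promoted to primaries.
System.out.println("size = " + cache.size(CachePeekMode.PRIMARY));

for (int i = 1; i <= 10; i++)
    System.out.println(i + " -> " + cache.get(i));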