While testing Ignite's memory features, I ran into some problems.
The documentation says I can enable swapping to disk in CacheConfiguration and set the swap file path in MemoryPolicyConfiguration.
However, setSwapEnabled is missing in Ignite 2.0 while setSwapFilePath still exists. So I wonder whether swapping to disk is still available in Ignite 2.0, and if so, how I can manage it.
Define your memory policy, then inject it into your cache, like this:
<!-- Defining a custom memory policy. -->
<property name="memoryPolicies">
<list>
<bean class="org.apache.ignite.configuration.MemoryPolicyConfiguration">
<property name="name" value="Default_Region"/>
<!-- 100 MB memory region with disabled eviction -->
<property name="initialSize" value="#{100 * 1024 * 1024}"/>
<!-- Setting a name of the swapping file. -->
<property name="swapFilePath" value="mindMemoryPolicySwap"/>
</bean>
</list>
</property>
Try it.
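If you prefer configuring this in code, here is a minimal Java sketch of the same setup (assuming the Ignite 2.x MemoryPolicyConfiguration API that matches the XML above; the cache name "myCache" is just a placeholder):

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.MemoryConfiguration;
import org.apache.ignite.configuration.MemoryPolicyConfiguration;

public class SwapPolicyExample {
    public static void main(String[] args) {
        // Memory policy mirroring the XML above: 100 MB region backed by a swap file.
        MemoryPolicyConfiguration plcCfg = new MemoryPolicyConfiguration();
        plcCfg.setName("Default_Region");
        plcCfg.setInitialSize(100L * 1024 * 1024);
        plcCfg.setSwapFilePath("mindMemoryPolicySwap");

        MemoryConfiguration memCfg = new MemoryConfiguration();
        memCfg.setMemoryPolicies(plcCfg);

        // Inject the policy into a cache by name ("myCache" is a placeholder).
        CacheConfiguration<Integer, String> cacheCfg = new CacheConfiguration<>("myCache");
        cacheCfg.setMemoryPolicyName("Default_Region");

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setMemoryConfiguration(memCfg);
        cfg.setCacheConfiguration(cacheCfg);

        Ignition.start(cfg);
    }
}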
We are using GridGain version: 8.8.10
JDK version: 1.8
We have an Ignite cluster with 3 nodes in Azure Kubernetes, with native persistence enabled. Some of our Ignite pods are going into CrashLoopBackOff with the exception below:
[07:45:45,477][WARNING][main][FileWriteAheadLogManager] Content of WAL working directory needs rearrangement, some WAL segments will be moved to archive: /gridgain/walarchive/node00-71fcf5d3-faf7-4d2b-abae-bd0621bb12a1. Segments from 0000000000000001.wal to 0000000000000008.wal will be moved, total number of files: 8. This operation may take some time.
[07:45:45,480][SEVERE][main][IgniteKernal] Exception during start processors, node will be stopped and close connections
class org.apache.ignite.IgniteCheckedException: Failed to start processor: GridProcessorAdapter []
at org.apache.ignite.internal.IgniteKernal.startProcessor(IgniteKernal.java:1938)
at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1159)
at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1787)
at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1711)
at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1141)
at org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:1059)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:945)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:844)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:714)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:683)
at org.apache.ignite.Ignition.start(Ignition.java:344)
at org.apache.ignite.startup.cmdline.CommandLineStartup.main(CommandLineStartup.java:290)
Caused by: class org.apache.ignite.internal.processors.cache.persistence.StorageException: Failed to move WAL segment [src=/gridgain/wal/node00-71fcf5d3-faf7-4d2b-abae-bd0621bb12a1/0000000000000001.wal, dst=/gridgain/walarchive/node00-71fcf5d3-faf7-4d2b-abae-bd0621bb12a1/0000000000000001.wal]
at org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager.moveSegmentsToArchive(FileWriteAheadLogManager.java:3326)
at org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager.prepareAndCheckWalFiles(FileWriteAheadLogManager.java:1542)
at org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager.start0(FileWriteAheadLogManager.java:494)
at org.apache.ignite.internal.processors.cache.GridCacheSharedManagerAdapter.start(GridCacheSharedManagerAdapter.java:60)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.start(GridCacheProcessor.java:605)
at org.apache.ignite.internal.IgniteKernal.startProcessor(IgniteKernal.java:1935)
... 11 more
Caused by: java.nio.file.FileAlreadyExistsException: /gridgain/walarchive/node00-71fcf5d3-faf7-4d2b-abae-bd0621bb12a1/0000000000000001.wal
at java.base/sun.nio.fs.UnixCopyFile.move(UnixCopyFile.java:450)
at java.base/sun.nio.fs.UnixFileSystemProvider.move(UnixFileSystemProvider.java:267)
at java.base/java.nio.file.Files.move(Files.java:1422)
at org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager.moveSegmentsToArchive(FileWriteAheadLogManager.java:3307)
... 16 more
[07:45:45,482][SEVERE][main][IgniteKernal] Got exception while starting (will rollback startup routine).
It seems that during WAL archival a segment file with the same name already exists in the archive, and Ignite is not able to overwrite it. Is there any specific WAL archival configuration that we are missing?
<property name="dataStorageConfiguration">
<bean class="org.apache.ignite.configuration.DataStorageConfiguration">
<!-- set the size of wal segments to 128MB -->
<property name="walSegmentSize" value="#{128 * 1024 * 1024}"/>
<property name="writeThrottlingEnabled" value="true"/>
<!-- Set the page size to 8 KB -->
<property name="pageSize" value="#{8 * 1024}"/>
<property name="defaultDataRegionConfiguration">
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
<property name="name" value="Default_Region"/>
<!-- Memory region of 20 MB initial size. -->
<property name="initialSize" value="#{20 * 1024 * 1024}"/>
<!-- Memory region of 8 GB max size. -->
<property name="maxSize" value="#{8L * 1024 * 1024 * 1024}"/>
<!-- Enabling eviction for this memory region. -->
<property name="pageEvictionMode" value="RANDOM_2_LRU"/>
<property name="persistenceEnabled" value="true"/>
<!-- Increasing the buffer size to 1 GB. -->
<property name="checkpointPageBufferSize" value="#{1024L * 1024 * 1024}"/>
</bean>
</property>
<property name="walPath" value="/gridgain/wal"/>
<property name="walArchivePath" value="/gridgain/walarchive"/>
</bean>
</property>
Has anyone faced a similar issue with an Ignite Kubernetes cluster?
We are observing this in GKE. In AKS it works fine. We are using the Apache Ignite Operator.
https://ignite.apache.org/docs/latest/installation/kubernetes/gke-deployment
With Ignite Native Persistence enabled, is there a way to know if the query result is being fetched from cache or disk?
I am using Apache Ignite 2.7.5 with 2 nodes running in PARTITIONED mode with the following configuration at each node.
<bean class="org.apache.ignite.configuration.DataStorageConfiguration">
<!-- Redefining the default region's settings -->
<property name="pageSize" value="#{4 * 1024}"/>
<!--<property name="writeThrottlingEnabled" value="true"/>-->
<property name="defaultDataRegionConfiguration">
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
<property name="persistenceEnabled" value="true"/>
<property name="initialSize" value="#{105L * 1024 * 1024 * 1024}"/>
<property name="name" value="Default_Region"/>
<!-- Setting the max size of the default region to 120 GB. -->
<property name="maxSize" value="#{120L * 1024 * 1024 * 1024}"/>
<property name="checkpointPageBufferSize"
value="#{4096L * 1024 * 1024}"/>
<!--<property name="pageEvictionMode" value="RANDOM_2_LRU"/>-->
</bean>
</property>
</bean>
All data is stored in so-called pages located in off-heap memory. A page can live either in RAM or on disk; for the latter, Ignite first loads the page into off-heap RAM, as it doesn't perform reads from disk directly. On-heap memory is required for data processing, like merging data sets for an SQL query, processing communication requests, and so on.
There is no solid way of detecting whether a given piece of data has already been preloaded into RAM, but there are metrics that can help you see what's happening to the cluster in general, e.g., how often page eviction happens.
You might want to check the following metrics for a data region.
These three give an estimate of the data size loaded to a data region:
TotalAllocatedPages
PagesFillFactor
EmptyDataPages
When persistence is enabled, these provide information on how intensively we use disk for reads (smaller is better):
PagesReplaceRate
PagesRead
PagesReplaced
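If you'd rather read these programmatically than over JMX, here is a rough Java sketch using the DataRegionMetrics API. Note that metrics must be enabled on the data region (e.g., via DataRegionConfiguration.setMetricsEnabled(true)), and "ignite-config.xml" is just a placeholder for your actual config file:

import java.util.Collection;
import org.apache.ignite.DataRegionMetrics;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class RegionMetricsExample {
    public static void main(String[] args) {
        // "ignite-config.xml" is a placeholder for your actual Spring config.
        Ignite ignite = Ignition.start("ignite-config.xml");

        // Most values stay at 0 unless metrics are enabled on the region,
        // e.g. DataRegionConfiguration.setMetricsEnabled(true).
        Collection<DataRegionMetrics> regions = ignite.dataRegionMetrics();

        for (DataRegionMetrics m : regions) {
            System.out.println("Region: " + m.getName());
            System.out.println("  TotalAllocatedPages: " + m.getTotalAllocatedPages());
            System.out.println("  PagesFillFactor:     " + m.getPagesFillFactor());
            System.out.println("  PagesReplaceRate:    " + m.getPagesReplaceRate());
        }
    }
}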
Some implementation details that might be useful:
Ignite Durable Memory, Ignite Persistent Store - under the hood
We have an Ignite cluster of 3 instances. How can we assign a fixed amount of memory to every Ignite instance?
(OS: Ubuntu 14.04
Ignite version: 2.4)
If you want to set the heap memory size, use JVM options such as:
-Xms512m -Xmx512m
Off-Heap memory allows your cache to overcome lengthy JVM Garbage Collection (GC) pauses when working with large heap sizes by caching data outside of main Java Heap space, but still in RAM.
By default, Ignite nodes consume up to 20% of the RAM available locally. You can change this value as follows:
<!-- Redefining maximum memory size for the cluster node usage. -->
<property name="dataStorageConfiguration">
<bean class="org.apache.ignite.configuration.DataStorageConfiguration">
<!-- Redefining the default region's settings -->
<property name="defaultDataRegionConfiguration">
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
<property name="name" value="Default_Region"/>
<!-- Setting the size of the default region to 4GB. -->
<property name="maxSize" value="#{4L * 1024 * 1024 * 1024}"/>
</bean>
</property>
</bean>
</property>
On-heap caching gives you the option to additionally store cache entries in the Java heap. You can enable it by setting org.apache.ignite.configuration.CacheConfiguration.setOnheapCacheEnabled(...) to true.
You can read more here: https://apacheignite.readme.io/docs/memory-configuration
Because the heap size isn't unlimited, you may want to use eviction policies:
https://apacheignite.readme.io/docs/evictions
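For completeness, a minimal Java sketch combining both knobs: a fixed off-heap cap via the default data region plus on-heap caching for one cache. The names and the 4 GB value simply mirror the XML above and are otherwise placeholders:

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class FixedMemoryExample {
    public static void main(String[] args) {
        // Cap this node's off-heap usage at 4 GB via the default data region.
        DataRegionConfiguration regionCfg = new DataRegionConfiguration();
        regionCfg.setName("Default_Region");
        regionCfg.setMaxSize(4L * 1024 * 1024 * 1024);

        DataStorageConfiguration storageCfg = new DataStorageConfiguration();
        storageCfg.setDefaultDataRegionConfiguration(regionCfg);

        // Optionally keep a copy of entries on the Java heap for one cache.
        CacheConfiguration<Integer, String> cacheCfg = new CacheConfiguration<>("myCache");
        cacheCfg.setOnheapCacheEnabled(true);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDataStorageConfiguration(storageCfg);
        cfg.setCacheConfiguration(cacheCfg);

        // The heap size itself is still controlled by JVM flags, e.g. -Xms512m -Xmx512m.
        Ignition.start(cfg);
    }
}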
My question is really simple.
I am using the example XML config from https://apacheignite.readme.io/docs/evictions#section-first-in-first-out-fifo- to set up the following eviction policy.
<bean class="org.apache.ignite.cache.CacheConfiguration">
<property name="name" value="myCache"/>
<!-- Enabling on-heap caching for this distributed cache. -->
<property name="onheapCacheEnabled" value="true"/>
<property name="evictionPolicy">
<!-- FIFO eviction policy. -->
<bean class="org.apache.ignite.cache.eviction.fifo.FifoEvictionPolicy">
<!-- Set the maximum cache size to 1 million (default is 100,000). -->
<property name="maxSize" value="1000000"/>
</bean>
</property>
...
</bean>
How can I verify that my eviction policy has been set up successfully before I start seeding data into my cache? I have been using visor with the config command: I can see that Eviction Policy Enabled is set to on and Eviction Policy is set to o.a.i.cache.eviction.fifo.FifoEvictionPolicy, but Eviction Policy Max Size is set to <n/a>, even though it is configured in the XML. That leads me to think the eviction policy max size is not set up correctly. Can someone shed some light on this question?
Apache Ignite exposes management beans, so you can verify your cache configuration using MBeans. Just run jconsole from the JDK and check the cache configuration.
Thanks, Alexey
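Another option (a sketch, assuming the cache name and config file from the question's XML) is to read the runtime cache configuration back through the API and inspect the eviction policy directly:

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.eviction.fifo.FifoEvictionPolicy;
import org.apache.ignite.configuration.CacheConfiguration;

public class VerifyEvictionPolicy {
    public static void main(String[] args) {
        // "ignite-config.xml" is a placeholder for the XML shown in the question.
        Ignite ignite = Ignition.start("ignite-config.xml");
        IgniteCache<Object, Object> cache = ignite.cache("myCache");

        // Read back the runtime configuration and inspect the eviction policy.
        CacheConfiguration ccfg = cache.getConfiguration(CacheConfiguration.class);
        FifoEvictionPolicy plc = (FifoEvictionPolicy) ccfg.getEvictionPolicy();

        System.out.println("On-heap enabled: " + ccfg.isOnheapCacheEnabled());
        System.out.println("FIFO maxSize:    " + plc.getMaxSize());
    }
}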
I ran an example with two Ignite cache nodes in two JVMs: each JVM runs one Ignite node, and both nodes map to the same cache.
ignite-config.xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
...
<property name="cacheConfiguration">
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<!-- Set a cache name. -->
<property name="name" value="cacheName"/>
<!-- Set cache mode. -->
<property name="cacheMode" value="PARTITIONED"/>
<!-- Number of backup nodes. -->
<property name="backups" value="1"/>
...
</bean>
</property>
</bean>
Test steps:
1. One of the Ignite nodes starts first and writes 10 entries (key-value: 1-1, 2-2, 3-3 ... 10-10).
2. Then the second node starts and maps to the same cache.
3. The nodes then rebalance the data between them: the first node holds 4 entries, the second holds 6.
4. Then I kill the JVM of the first cache node.
Result: the backup node doesn't own all 10 entries as I expect. Why?
I'm not sure why ignitevisorcmd.sh is reporting that the keys are lost. I suggest looking directly into the cache by querying it after you kill the node. Or, as Valentin suggests, you can try IgniteCache.size().
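For example, a quick Java sketch of that check (assuming the cache name from the XML above, run on the surviving node after the first JVM is killed):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CachePeekMode;

public class CountAfterFailover {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start("ignite-config.xml");
        IgniteCache<Integer, Integer> cache = ignite.cache("cacheName");

        // After the first JVM is killed, backups on the surviving node are
        // promoted to primaries, so both counts should come back as 10.
        System.out.println("Primary entries: " + cache.size(CachePeekMode.PRIMARY));
        System.out.println("All entries:     " + cache.size(CachePeekMode.ALL));
    }
}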