When does Ignite allocate the next memory segment? - ignite

I use embedded Apache Ignite in Spring.
During a stress test, TPS temporarily drops whenever the log line below appears:
a.i.i.p.impl.PageMemoryNoStoreImpl : Allocated next memory segment [plcName=default, chunkSize=422.2MB]
I guessed some memory configuration was needed, so I configured systemCacheMaxSize and systemCacheInitialSize, but nothing changed.
My application needs a stable TPS guarantee.
How can I fix it?
Thanks

I solved it.
I needed a MemoryPolicyConfiguration.
Here is my solution:
new MemoryConfiguration()
    .setMemoryPolicies(new MemoryPolicyConfiguration()
        .setInitialSize(initSizeMb * 1024L * 1024L)   // allocated up front as the first segment
        .setMaxSize(maxSizeMb * 1024L * 1024L));      // further segments are allocated on demand up to this limit
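For context, a minimal sketch of how such a policy might be plugged into the node configuration (sizes and variable names are illustrative, not taken from the answer; on Ignite 2.3+ the equivalent API is DataStorageConfiguration/DataRegionConfiguration):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.MemoryConfiguration;
import org.apache.ignite.configuration.MemoryPolicyConfiguration;

public class MemoryPolicyExample {
    public static void main(String[] args) {
        long initSizeMb = 1024;   // illustrative; size the region to your working set
        long maxSizeMb = 1024;    // initial == max avoids segment allocation while under load

        MemoryConfiguration memCfg = new MemoryConfiguration()
            .setMemoryPolicies(new MemoryPolicyConfiguration()
                .setInitialSize(initSizeMb * 1024L * 1024L)
                .setMaxSize(maxSizeMb * 1024L * 1024L));

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setMemoryConfiguration(memCfg);

        Ignite ignite = Ignition.start(cfg);
    }
}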

Related

Ignite offheapUsedSize doesn't get reset after clearing the cache

I read Ignite's offheapUsedSize through ignite.dataRegionMetrics().getOffheapUsedSize(), but after I clear the cache this value doesn't get reset; it just keeps increasing over time. I have tried all of the following methods and it still doesn't work:
IgniteCache.clear
IgniteCache.removeAll
IgniteCache.clearStatistics()
IgniteCache.resetQueryMetrics
IgniteCache.resetQueryDetailMetrics
IgniteCache.destroy
offheapUsedSize gets reset only after I restart the server.
This is working as designed:
http://apache-ignite-users.70518.x6.nabble.com/how-to-monitor-off-heap-size-used-td27733.html
This is my way to get the real off-heap usage:
DataRegionMetricsSnapshot.getTotalUsedPages() * DataRegionMetricsSnapshot.getPageSize()
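A minimal sketch of that calculation (assuming the getTotalUsedPages()/getPageSize() accessors referenced above exist in your Ignite version's DataRegionMetrics, and that an Ignite instance is in scope):

import org.apache.ignite.DataRegionMetrics;
import org.apache.ignite.Ignite;

// Sketch only: sums used pages * page size across all configured data regions.
static long offHeapInUse(Ignite ignite) {
    long usedBytes = 0;
    for (DataRegionMetrics m : ignite.dataRegionMetrics())
        usedBytes += m.getTotalUsedPages() * (long) m.getPageSize();
    return usedBytes;
}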

Why is the Apache Ignite cache.replace(K, V, V) API call performing slowly?

We are running an Ignite cluster with 12 nodes on Ignite 2.7.0, OpenJDK 1.8, on the RHEL platform.
We are seeing heavy CPU time spent in https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteCache.html#replace-K-V-V-
We are witnessing slowness in one of our processes, and when we drilled into it further by profiling the JVM, the main culprit (taking ~78% of total time) appears to be the Ignite cache.replace(K, V, V) API call.
Out of the 77.9% spent in replace, 39% is taken by GridCacheAdapter.equalVal and 38.5% by GridCacheAdapter.put.
The cache is PARTITIONED and ATOMIC, with readThrough, writeThrough, and writeBehindEnabled set to true.
Attaching the profiling snapshot of one node (the profiling results on the other nodes are similar). Can someone please check and suggest what could be the cause, or whether there is a known performance issue in this Ignite version related to cache.replace(K, V, V)?
JVM profiling snapshot of one node
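For reference, a minimal sketch of the cache setup described above (the cache name and MyCacheStore are placeholders, not taken from the question):

import javax.cache.configuration.FactoryBuilder;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<String, String> ccfg = new CacheConfiguration<>("myCache");
ccfg.setCacheMode(CacheMode.PARTITIONED);
ccfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);
ccfg.setReadThrough(true);
ccfg.setWriteThrough(true);
ccfg.setWriteBehindEnabled(true);   // store updates are buffered and flushed asynchronously in batches
ccfg.setCacheStoreFactory(FactoryBuilder.factoryOf(MyCacheStore.class));   // MyCacheStore: hypothetical CacheStore implementation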
I guess that it can be related to the following issue:
https://issues.apache.org/jira/browse/IGNITE-5003
The problem there is related to operations on the same key that arrive before the previous batch of updates (containing that key) has been stored in the database.
As far as I can see, the fix should be included in Ignite 2.8.
Update:
I tested the putAll operation. From the next two pictures you can see that putAll is waiting for GridCacheWriteBehindStore.write (two different threads), which contains updateCache:
public void write(Entry<? extends K, ? extends V> entry) {
    try {
        if (log.isDebugEnabled())
            log.debug(S.toString("Store put",
                "key", entry.getKey(), true,
                "val", entry.getValue(), true));

        updateCache(entry.getKey(), entry, StoreOperation.PUT);
    }
    // catch block omitted from the quoted snippet
}
And the issue referenced above can affect your put operations (and replace as well).
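If the write-behind flushing itself is the bottleneck, its behaviour is configurable; here is a sketch of the relevant knobs (values are illustrative, not a recommendation from the answer above):

// Illustrative write-behind tuning on an existing CacheConfiguration (ccfg).
ccfg.setWriteBehindFlushFrequency(5_000);   // flush interval in milliseconds
ccfg.setWriteBehindFlushSize(10_240);       // flush once this many entries are buffered
ccfg.setWriteBehindBatchSize(512);          // entries passed to the store per batch
ccfg.setWriteBehindFlushThreadCount(2);     // threads draining the write-behind buffer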

Why is the Flink container vcore size always 1?

I am running Flink on YARN (more precisely, in an AWS EMR YARN cluster).
I read in the Flink documentation and source code that, by default, for each TaskManager container Flink will request the number of slots per TaskManager as the number of vcores when requesting resources from YARN.
I also confirmed this from the source code:
// Resource requirements for worker containers
int taskManagerSlots = taskManagerParameters.numSlots();
int vcores = config.getInteger(ConfigConstants.YARN_VCORES,
    Math.max(taskManagerSlots, 1));
Resource capability = Resource.newInstance(containerMemorySizeMB,
    vcores);

resourceManagerClient.addContainerRequest(
    new AMRMClient.ContainerRequest(capability, null, null, priority));
When I use -yn 1 -ys 3 to start Flink, I assume YARN will allocate 3 vcores for the only TaskManager container, but when I check the number of vcores for each container in the YARN ResourceManager web UI, I always see that the number of vcores is 1. I also see vcores = 1 in the YARN ResourceManager logs.
I debugged the Flink source code down to the lines pasted above, and I saw that the value of vcores is 3.
This really confuses me. Can anyone help clarify this for me? Thanks.
An answer from Kien Truong:
Hi,
You have to enable CPU scheduling in YARN; otherwise it always shows that only 1 CPU is allocated for each container, regardless of how many Flink tries to allocate. So you should add (edit) the following property in capacity-scheduler.xml:
<property>
  <name>yarn.scheduler.capacity.resource-calculator</name>
  <!-- <value>org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator</value> -->
  <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
</property>
Also, the TaskManager memory is, for example, 1400MB, but Flink reserves some amount of it for off-heap memory, so the actual heap size is smaller.
This is controlled by two settings:
containerized.heap-cutoff-min: default 600MB
containerized.heap-cutoff-ratio: default 15% of the TM's memory
That's why your TM's heap size is limited to ~800MB (1400 - 600).
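A rough sketch of that arithmetic (mirroring the two settings above; the exact accounting may differ between Flink versions):

// Rough sketch of the containerized heap-cutoff arithmetic described above.
long tmMemoryMb = 1400;        // taskmanager container memory
long cutoffMinMb = 600;        // containerized.heap-cutoff-min
double cutoffRatio = 0.15;     // containerized.heap-cutoff-ratio

long cutoffMb = Math.max(cutoffMinMb, (long) (tmMemoryMb * cutoffRatio));   // max(600, 210) = 600
long heapMb = tmMemoryMb - cutoffMb;                                        // 1400 - 600 = 800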
From yinhua:
Use ./bin/yarn-session.sh to start a session; you need to add the -s argument.
-s,--slots   Number of slots per TaskManager
details:
https://ci.apache.org/projects/flink/flink-docs-release-1.4/ops/deployment/yarn_setup.html
https://ci.apache.org/projects/flink/flink-docs-release-1.4/ops/cli.html#usage
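For example (values are illustrative), a session with one TaskManager and three slots could be requested with:
./bin/yarn-session.sh -n 1 -s 3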
I finally got the answer.
It's because YARN uses the DefaultResourceCalculator allocation strategy, so only memory is counted by the YARN ResourceManager; even though Flink requested 3 vcores, YARN simply ignores the CPU core count.

Aerospike cluster not cleaning available blocks

We use Aerospike in our projects and ran into a strange problem.
We have a 3-node cluster, and after some node restarts it stopped working.
So we built a test to reproduce our problem.
We made a test cluster: 3 nodes, replication factor = 2.
Here is our namespace config:
namespace test {
    replication-factor 2
    memory-size 100M
    high-water-memory-pct 90
    high-water-disk-pct 90
    stop-writes-pct 95
    single-bin true
    default-ttl 0

    storage-engine device {
        cold-start-empty true
        file /tmp/test.dat
        write-block-size 1M
    }
}
We wrote 100MB of test data, and after that we had this situation:
available pct was about 66% and disk usage about 34%.
All good.
But then we stopped one node. After migration we saw that available pct = 49% and disk usage 50%.
We returned the node to the cluster, and after migration disk usage went back to roughly the previous 32%, but available pct on the old nodes stayed at 49%.
We stopped a node one more time:
available pct = 31%
Repeating one more time we got this situation:
available pct = 0%
Our cluster crashed; clients get AerospikeException: Error Code 8: Server memory error.
So how can we reclaim available pct?
If your defrag-q is empty (and you can see whether it is by grepping the logs), then the issue is likely that your namespace is smaller than your post-write-queue. Blocks on the post-write-queue are not eligible for defragmentation, so you would see avail-pct trending down with no defragmentation to reclaim the space.
By default the post-write-queue is 256 blocks, which in your case (1MB write-block-size) equates to 256MB. If your namespace is smaller than that, you will see avail-pct continue to drop until you hit stop-writes.
You can reduce the size of the post-write-queue dynamically (i.e. no restart needed) using the following command; here I suggest 8 blocks:
asinfo -v 'set-config:context=namespace;id=<NAMESPACE>;post-write-queue=8'
If you are happy with this value, you should amend your aerospike.conf to include it so that it persists after a node restart.
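For example, a sketch of how that might look in the namespace's storage-engine section of aerospike.conf (mirroring the test config above; check the Aerospike configuration reference for the exact placement on your version):

storage-engine device {
    cold-start-empty true
    file /tmp/test.dat
    write-block-size 1M
    post-write-queue 8      # persists the dynamic change across node restarts
}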

How to dump Permgen?

I wanted to take a dump of the permgen of an application server.
I do not want to use -XX:+TraceClassLoading -XX:+TraceClassUnloading because I do not want to restart the server, and I do not want to use jconsole either.
Is there any tool like jmap (used for heap dumps; I didn't find any option for permgen) that can get the permgen, so that I only have to supply the PID?
jmap -permstat <pid>
is going to produce output like this:
30337 intern Strings occupying 2746200 bytes.
class_loader classes bytes parent_loader alive? type
<bootstrap> 2031 7253392 null live <internal>
0x517474f0 1 1760 null dead sun/reflect/DelegatingClassLoader#0x43f95d38
0x4f83f670 1 1744 0x4ebfb8e8 dead sun/reflect/DelegatingClassLoader#0x43f95d38
[...]
total = 287 10020 35889952 N/A alive=3, dead=284 N/A
This is not a full dump, but doing this will allow you to do some investigation.
I am still looking into how to find more information.
It is not possible to 'dump permgen' the way it is done for the heap.
In addition to jmap -permstat, which others have presented, you can analyze a standard heap dump to shed some light on your permanent generation, as described in this blog entry, 'The Unknown Generation: Perm':
Because a heap dump does not really contain a lot of information about perm space, perm problems are difficult to tackle. Recently, I found this great article by Sporar, Sundararajan and Kieviet. The authors shed some light on the permanent generation. Of course, I had to check right away if and how I can use the Eclipse Memory Analyzer to analyze this “unknown” generation. This is what this blog is about.
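A standard heap dump of a running process (which that analysis starts from) can be taken with jmap as well, for example:
jmap -dump:format=b,file=heap.hprof <pid>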