We have an architecture where the user accesses the dashboard in two steps:
1. Create a user session (iServer Session) via web services (a server-to-server call).
2. Fetch the document from the client browser, appending the same user session.
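For concreteness, a rough sketch of that flow using Java's built-in HTTP client; the endpoints, parameter names, and session-ID handling are made-up placeholders, since the actual web-service API isn't shown:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DashboardFlow {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Step 1: server-to-server call to create the user session.
        HttpRequest createSession = HttpRequest.newBuilder(
                URI.create("https://iserver.example.com/api/session"))
            .POST(HttpRequest.BodyPublishers.ofString("user=demo"))
            .build();
        String sessionId = client.send(createSession,
            HttpResponse.BodyHandlers.ofString()).body();

        // Step 2: fetch the document, appending the same session.
        // In the real system this request comes from the client browser.
        HttpRequest fetchDocument = HttpRequest.newBuilder(
                URI.create("https://iserver.example.com/api/document?sessionId=" + sessionId))
            .GET()
            .build();
        HttpResponse<byte[]> doc = client.send(fetchDocument,
            HttpResponse.BodyHandlers.ofByteArray());
        System.out.println("Fetched " + doc.body().length + " bytes");
    }
}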
Other Info
The total size of the documents is about 5 MB
JVM heap size is 8 GB
Issue:
When we start the load test with 10 users, JVM heap usage increases and reaches the maximum in no time as new load is injected.
When a user logs out or the session times out, the user session ends, but the used heap memory stays the same.
As a result, dashboard performance degrades, and requests start timing out once the heap reaches the maximum.
Can someone please shed some light on how we can resolve this?
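A hedged first step, assuming a HotSpot JVM: capture a heap dump when the heap is close to its maximum and check which objects keep the session data reachable after logout. The flags and the jmap command below are standard HotSpot tooling; the dump path is just an example:
-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/heapdumps
jmap -histo:live <pid>
If the histogram shows session-scoped objects whose counts do not drop after logout, something is still holding strong references to them.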
Related
Expiry policy on off-heap entries is not working as expected. I am seeing a linear increase in off-heap entries and then, after some time, a sudden dip in off-heap size. Are cache entries only marked as expired after the expiry time and then collected later in bulk? I cannot find any documentation about this.
You need to enable eager TTL in the cache configuration. It creates a background thread that removes the expired entries.
There are two expiry-policy modes in Ignite, eager and non-eager: https://www.gridgain.com/docs/latest/developers-guide/configuring-caches/expiry-policies#eager-ttl
With eager TTL, entries are expired periodically by a background thread. With non-eager TTL, entries are only expired upon access, so expired entries can sit in off-heap storage for some time until they are read.
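A minimal sketch of that configuration using Ignite's public Java API; the cache name and the five-minute expiry are arbitrary examples:

import java.util.concurrent.TimeUnit;
import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;

public class EagerTtlConfig {
    public static void main(String[] args) {
        CacheConfiguration<Integer, String> cfg = new CacheConfiguration<>("exampleCache");
        // Expire entries five minutes after creation.
        cfg.setExpiryPolicyFactory(
            CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.MINUTES, 5)));
        // Eager TTL: a background thread evicts expired entries
        // instead of waiting for them to be read again.
        cfg.setEagerTtl(true);

        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache(cfg);
            cache.put(1, "expires even if never read again");
        }
    }
}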
Also keep in mind that the JVM can allocate more memory than is immediately required. The amount of memory allocated by the Java process is not directly related to the amount of data stored in Ignite.
I am doing a load test on a WebLogic server.
When I apply a relatively high load to the server, the active thread count stops at 40-50 and never increases again.
When I apply a relatively low load to the server, the active thread count keeps increasing until it reaches the limit.
The Oracle documentation says that as throughput increases, the active thread count will increase.
Under a high load, how can throughput increase if the active thread count never increases?
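Not an answer, but for cross-checking the numbers during a load run: the standard ThreadMXBean can sample thread counts from inside the JVM. Note that it counts all JVM threads, not just WebLogic's self-tuning execute threads, so treat it only as a rough comparison against what the console reports:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class ThreadCountSampler {
    public static void main(String[] args) throws InterruptedException {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        // Print live and peak thread counts once per second.
        while (true) {
            System.out.printf("live=%d peak=%d%n",
                    threads.getThreadCount(), threads.getPeakThreadCount());
            Thread.sleep(1_000);
        }
    }
}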
I am processing one of my tables in Analysis Services by partition, but it is failing. The failing partition has around 6M rows. I tried to process another partition, which has 5M rows, and it succeeded.
The error message is something like:
The following system error occurred: Insufficient quota to complete the requested service.
The Tabular model has exceeded the memory limits while loading the data into memory. You may need to alter the paging settings (the VertiPaqPagingPolicy server property) for the instance.
- 0 (the default): no paging is allowed. If memory is insufficient, processing fails with an out-of-memory error. All Tabular data is locked in memory.
- 1: enables paging to disk using the operating system page file (pagefile.sys). Only hash dictionaries are locked in memory, and Tabular data is allowed to exceed total physical memory.
- 2: enables paging to disk using memory-mapped files. Only hash dictionaries are locked in memory, and Tabular data is allowed to exceed total physical memory. This setting has since been discontinued.
These values are taken from this blog, which gives a good overview.
We are trying to build a low-latency system in Java.
As part of fine-tuning, we warm up the application during startup so the hot paths go above the default compile threshold of 10,000 invocations. Live processing latency immediately after the warm-up looks better, and it stays good while the system is continuously processing events. But when the system goes back to an idle state, with no events arriving for about 10 minutes, the latency starts degrading.
I initially thought the JVM might be evicting the generated code from the code cache. I set a larger code cache size and enabled code cache events, and noticed that the code cache never even reaches 50% capacity.
Any idea what the issue could be? Essentially, the initial warm-up during startup goes in vain once the system is idle for a short duration.
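One common mitigation, offered as a sketch rather than a diagnosis: keep the hot path warm during idle periods with synthetic traffic, so the JIT-compiled code and CPU caches stay hot. Everything below is hypothetical; hotPath stands in for whatever no-op event the application can process without business side effects:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class KeepWarmHeartbeat {
    // hotPath should invoke the same code the real events go through,
    // with a synthetic payload that has no business side effects.
    public static ScheduledExecutorService start(Runnable hotPath) {
        ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r, "keep-warm-heartbeat");
                t.setDaemon(true);
                return t;
            });
        scheduler.scheduleAtFixedRate(hotPath, 1, 1, TimeUnit.SECONDS);
        return scheduler;
    }
}

If latency stays good with such a heartbeat running, the degradation comes from something that decays while idle (CPU power states, cache and page residency) rather than from code-cache eviction.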
The used size of the PS Perm Gen memory pool increases with every request.
I don't think it is caused by my webapp: when I refresh http://www.myurl.de/manager/ (the built-in Tomcat manager), the used Perm Gen size grows by about 0.1 MB per 10 refreshes, while on my webapp it grows by about 0.2 MB per request.
When the Perm Gen max size is reached, I have to restart Tomcat.
How can I fix this?
Your application is definitely leaking resources.
I suggest you review your code (opening/closing database connections) and any other objects that could be released between requests.
Based on the given information, I cannot suggest anything further.
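As an illustration of the kind of pattern to look for, here is a sketch of a JDBC access method using try-with-resources; the DataSource, table, and column names are made up. It guarantees that the connection, statement, and result set are closed even when a request fails halfway:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class UserDao {
    private final DataSource dataSource;

    public UserDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public String findName(long id) throws SQLException {
        String sql = "SELECT name FROM users WHERE id = ?";
        // try-with-resources closes every resource in reverse order,
        // even if an exception is thrown mid-request.
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setLong(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString("name") : null;
            }
        }
    }
}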
Don't forget that you can always increase the JVM's Perm Gen memory using this flag (value in MB):
-XX:MaxPermSize=128m
...but that's only a temporary solution.