H2 database (in-memory mode) shutdown not freeing up memory - jvm

I am using an in memory h2 database with the following configuration:
jdbc.url=jdbc:h2:mem:myDb;DB_CLOSE_DELAY=-1
hibernate.dialect=org.hibernate.dialect.H2Dialect
jdbc.user=sa
jdbc.pass=
I can see that the free memory on the system keeps decreasing, and as soon as I restart the application, the memory comes back. I took a memory dump, but it does not show any running threads occupying a lot of memory. I am guessing this is because of the H2 database.
I checked the jmap output, and it shows the following:
BEFORE: [jmap histogram not reproduced here]
I tried issuing an H2 SHUTDOWN and manually calling System.gc(). That improves the jmap numbers, as shown below:
AFTER: [jmap histogram not reproduced here]
However, the freed memory is never returned to the OS: the used memory does not go down, nor does the free memory increase!
What am I missing here? If the memory keeps growing like this, my machine will crash after running for a few days; I cannot keep restarting it every two days!
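For reference, the shutdown-and-GC sequence described above would look roughly like this (a minimal sketch using the URL from the configuration; class and variable names are illustrative):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class H2ShutdownDemo {
    public static void main(String[] args) throws Exception {
        // Same URL as in the configuration above; DB_CLOSE_DELAY=-1 keeps the
        // in-memory database (and its heap-resident contents) alive until the
        // JVM exits or an explicit SHUTDOWN is issued.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:h2:mem:myDb;DB_CLOSE_DELAY=-1", "sa", "");
             Statement stmt = conn.createStatement()) {
            // SHUTDOWN releases the database's objects so the GC can reclaim
            // them within the Java heap.
            stmt.execute("SHUTDOWN");
        }
        // System.gc() only frees space inside the heap; HotSpot collectors
        // generally do not shrink the committed heap back down to hand pages
        // to the OS, which is consistent with the symptom described above.
        System.gc();
    }
}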

Related

Ignite - Out of memory

Environment: Ignite 2.8.1, Java 11
I am getting an out-of-memory error in my application a few minutes after startup. On analyzing the heap dump created on OOM, I see millions of instances of the class org.apache.ignite.internal.processors.continuous.GridContinuousMessage.
I do not see any direct references to these from my code.
Please suggest. Attaching a snapshot.
You seem to have a Continuous Query running whose listener is too slow (or hanging) and cannot process notifications in time, leading to their pile-up.
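For illustration, a continuous query with a local listener looks roughly like this (cache name, key/value types, and the process method are hypothetical); the comment marks the spot where a slow callback causes the pile-up:

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;

public class ContinuousQueryDemo {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache("myCache");

            ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();

            // The local listener receives update notifications. If this callback
            // blocks or cannot keep up, the unprocessed notifications (carried
            // internally as GridContinuousMessage instances) accumulate on the heap.
            qry.setLocalListener(events ->
                events.forEach(e -> process(e.getKey(), e.getValue())));

            try (QueryCursor<?> cursor = cache.query(qry)) {
                // Keep the cursor open while the application runs.
            }
        }
    }

    private static void process(Integer key, String value) {
        // Keep this fast and non-blocking to avoid the pile-up described above.
    }
}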

Google Compute Engine VM constantly crashes

On a Compute Engine VM in us-west-1b, I run 16 vCPUs at close to 99% usage. After a few hours, the VM crashes on its own. This is not a one-time incident, and I have to restart the VM manually each time.
There are a few instances of CPU usage suddenly dropping to around 30%, then bouncing back to 99%.
There are no logs for the VM at the time of the crash. Is there any other way to get the error logs?
How do I prevent VMs from crashing?
[CPU usage graph]
This could be your process manager deciding that your processes are out of resources. You might want to look into kernel tuning, where you can increase the limits on the number of active processes on your VM/OS and the resources they may use. Or you can try a bigger machine with more physical resources. In short, your machine is falling short on resources, so in order to keep the OS up, the process manager shuts processes down; SSH is one of those processes. Once you reset the machine, everything comes back to normal.
How the process manager/kernel decides to kill a process varies. It could simply be that a process has stayed up long enough to consume too many resources. Also note that the OS images you use to create a VM on GCP are custom-hardened by Google to limit the malicious capabilities of processes running on such machines.
One of the best ways to tackle this is:
increase the resources of your VM
then go back to the code and find out whether something is leaking in the process or memory (see the sketch after this list)
if all else fails, you may want to do some kernel tuning to make sure your processes have higher priority than other system processes. This is a risky idea, though, since you could end up creating a zombie VM.
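The question does not say what the workload is; assuming purely for illustration that it is a JVM application, a crude way to check for a process-level memory leak is to log the used heap periodically and watch whether the floor keeps rising across samples:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class HeapWatch {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // A used-heap floor that rises steadily, even after GC cycles, suggests a leak.
        scheduler.scheduleAtFixedRate(() -> {
            Runtime rt = Runtime.getRuntime();
            long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
            System.out.println("used heap: " + usedMb + " MB");
        }, 0, 1, TimeUnit.MINUTES);
    }
}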

Memory issues with GraphDB 7.0

I am trying to load a dataset into GraphDB 7.0. I wrote a Python script (in Sublime Text 3) to transform and load the data. The program suddenly stopped and closed, the computer threatened to restart but didn't, and I lost several hours' worth of computation, as GraphDB does not let me query the inserts. This is the error I get in GraphDB:
The currently selected repository cannot be used for queries due to an error:
org.openrdf.repository.RepositoryException: java.lang.RuntimeException: There is not enough memory for the entity pool to load: 65728645 bytes are required but there are 0 left. Maybe cache-memory/tuple-index-memory is too big.
I set the JVM as follows:
-Xms8g
-Xmx9g
I don't remember exactly what values I set for the cache and index memories. For the record, the database I need to parse has about 300k records; the program gave up at about 50k. What do I need to do to resolve this issue?
Open the Workbench and check the amount of memory you have given to cache memory.
Xmx should be a value big enough for
cache-memory + memory-for-queries + entity-pool-hash-memory
Sadly, the last of these cannot be calculated easily, because it depends on the number of entities in the repository. You will have to either:
Increase the Java heap with a bigger value for Xmx
Decrease the value for cache-memory
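As a hypothetical illustration (the question does not say what cache-memory was actually set to): with -Xmx9g and cache-memory at, say, 8g, only about 1 GB of heap remains for queries and the entity pool. The entity pool grows as entities are loaded, so at roughly 50k records it may have consumed that remainder entirely; at that point the additional 65728645 bytes (about 63 MB) it asks for cannot be granted, and the "0 left" error above is reported.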

S3cmd sync returns "killed"

I am trying to sync a few large Amazon S3 buckets.
When I run my s3cmd sync --recursive command, I get a response saying "killed".
What does this refer to? Is there a limit on the number of files that can be synced in S3?
After reading around, it looks like the program has memory-consumption issues. In particular, this can cause the OOM killer (out-of-memory killer) to take down the process to prevent the system from getting bogged down. A quick look at dmesg after the process is killed will generally show whether this is the case.
With that in mind, I would ensure you are on the latest release, whose release notes mention fixes to memory-consumption issues.
Old question, but I would like to say that before you try to add more physical memory or increase the VM's memory, try just adding more swap.
I did this with 4 servers (Ubuntu and CentOS) with low RAM (700 MB total, only 15 MB available), and they are working fine now.

High PF Usage on SQL Server

We are running SQL Server 2005 on 64-bit. The PF Usage reaches close to 25 GB every week, and queries that normally take less than a second become very slow during this time. What could be causing this?
After running PerfMon, the two counters Total Server Memory and Target Server Memory show 20 GB and 29 GB respectively. Processor Queue Length and Disk Queue Length are zero.
Sounds like you don't have enough memory. How much is on the server? More than your page file?
It could also mean SQL Server has been "paged out", meaning Windows decided to swap the data it had stored in memory out to disk.
Open PerfMon (go to a command prompt and type perfmon) and add these counters:
SQLServer:Buffer Manager - Buffer cache hit ratio
SQLServer:Buffer Manager - Page life expectancy
SQLServer:Memory Manager - Memory Grants Pending
If Buffer Cache Hit Ratio is < 95%, it means SQL Server is frequently using the disk instead of memory; you need more memory.
If your Page Life Expectancy is < 500, it means SQL Server is not keeping pages cached in memory for long; you need more memory.
If you have a lot of Memory Grants Pending, you need more memory.
There are also two counters that tell you how much memory SQL Server wants and how much it is actually using: Target Server Memory and Total Server Memory (the ones mentioned in the question). If Target > Total, guess what, you need more memory.
There are also some DMVs you can use to determine whether your queries are being held up waiting for memory to free up; the one to look at is sys.dm_os_wait_stats.
I community-wikied this so a real DBA can come in here and clean it up. I don't know the stats off the top of my head.
This is a great read on how to use DMVs to determine memory consumption in SQL Server: http://technet.microsoft.com/en-us/library/cc966540.aspx - look for the 'memory bottlenecks' section.
One big thing is to determine whether the memory pressure is internal to SQL Server (usually the case) or external (rare, but possible). For example, perhaps nothing is wrong with your server, but a driver installed by your antivirus program is leaking memory.
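If you would rather pull this from code than from SSMS, here is a minimal JDBC sketch (server, database, and credentials are placeholders) that queries the sys.dm_os_wait_stats DMV mentioned above for RESOURCE_SEMAPHORE waits, which indicate queries stalled waiting for memory grants:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class MemoryWaitCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details; replace with your own.
        String url = "jdbc:sqlserver://localhost;databaseName=master";
        try (Connection conn = DriverManager.getConnection(url, "sa", "<password>");
             Statement stmt = conn.createStatement();
             // RESOURCE_SEMAPHORE waits are a classic sign of internal memory pressure.
             ResultSet rs = stmt.executeQuery(
                 "SELECT wait_type, waiting_tasks_count, wait_time_ms " +
                 "FROM sys.dm_os_wait_stats " +
                 "WHERE wait_type = 'RESOURCE_SEMAPHORE'")) {
            while (rs.next()) {
                System.out.printf("%s: %d waits, %d ms total%n",
                        rs.getString("wait_type"),
                        rs.getLong("waiting_tasks_count"),
                        rs.getLong("wait_time_ms"));
            }
        }
    }
}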