I'm working on the following setup:
OS CentOS release 6.4 (Final)
Memory 1024 MB
CPU 1 × 2.4 GHz # 80%
Glassfish GlassFish Server Open Source Edition 3.1.2.2 (build 5)
I'm aware that this is only a small setup with limited memory, though it should suffice.
Here's my problem (I know a lot has been written about it already):
After some usage my memory seems to clog up. This leads to a hung GlassFish; /proc/meminfo shows something like:
MemTotal: 1030772 kB
MemFree: 158488 kB
Buffers: 3204 kB
Cached: 16340 kB
SwapCached: 7100 kB
Active: 413424 kB
Inactive: 410252 kB
top shows:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2380 root 20 0 860m 658m 6028 S 99.8 65.4 170:58.30 java
Conclusion
My GlassFish server tends to use all resources after a while. I'm not sure why the CPU is pegged, though I suspect it has something to do with garbage collection.
My question is: how can I prevent this from happening? Should I configure my GC, and if so, how?
1024MB isn't trivial. I don't know what your app is doing, but it's a respectable start.
Since Java uses a generational memory model, I'd recommend getting a dynamic picture of all the generations: perm gen, eden, etc.
I like VisualVM, with all the plug-ins installed. It lets me see memory, threads, CPU, and objects in real time. Try it and see if it helps you. More information is what you need.
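If the server itself has no GUI, the stock JDK tools can give you the same picture. A sketch (the PID 2380 is taken from your top output, so substitute your own; the backslash before the colon is asadmin's escaping convention):

```shell
# Sample heap generation occupancy (Eden, survivor, old, perm) every second
jstat -gcutil 2380 1000

# Or have the JVM log every collection; added as a GlassFish JVM option,
# it takes effect after a domain restart
asadmin create-jvm-options "-verbose\:gc"
```

If the GC log shows back-to-back full collections that reclaim almost nothing, the heap is simply too small for the workload, and no collector tuning will fix that.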
Related
I'm running my unit-test code for neo4j.
My environment:
Ubuntu 20.04 LTS server
1 GB memory
1 CPU
Here is what is displayed in the console:
====================================== test session starts ======================================
platform linux -- Python 3.8.10, pytest-7.1.2, pluggy-1.0.0
rootdir: ~/morsvq, configfile: pytest.ini
plugins: mock-3.8.2
collected 2 items
---------------------------------------- live log setup -----------------------------------------
INFO testcontainers.core.container:container.py:52 Pulling image neo4j:latest
INFO testcontainers.core.container:container.py:63 Container started: ad7963ed01
INFO testcontainers.core.waiting_utils:waiting_utils.py:46 Waiting to be ready...
INFO testcontainers.core.waiting_utils:waiting_utils.py:46 Waiting to be ready...
ERROR neo4j:__init__.py:571 Failed to read from defunct connection IPv4Address(('localhost', 49153)) (IPv4Address(('127.0.0.1', 49153)))
The same code runs successfully on a faster virtual machine with 8 GB of memory, so the code itself shouldn't be faulty. My suspicion is that it has something to do with my configuration, so that it now consumes too much memory.
I've checked the documentation on the official website, but it doesn't mention this memory problem. I wonder if someone has encountered a similar problem? How can I fix this?
Disclaimer: I am a maintainer of tc-java, so I have only some basic experience with tc-python. However, some facts and constraints are universal across Testcontainers language implementations.
As you already wrote, the code runs fine on a more powerful machine while it fails on an extremely limited one. 1 GB of RAM is not much; I would expect it is generally not enough to start a Neo4j Docker container successfully without memory swapping. Swapping would make startup and interactions very slow, hence the startup timeout triggers.
For further debugging, you can run the Neo4j container directly using Docker CLI on your environment and see how it behaves.
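For example, a minimal sketch (the password is a placeholder; the NEO4J_dbms_memory_* variable names follow the Neo4j 4.x image and were renamed to NEO4J_server_memory_* in 5.x):

```shell
# Cap Neo4j's heap and page cache so it has a chance to fit in 1 GB of RAM
docker run --rm -p 7687:7687 \
  -e NEO4J_AUTH=neo4j/test1234 \
  -e NEO4J_dbms_memory_heap_max__size=512m \
  -e NEO4J_dbms_memory_pagecache_size=128m \
  neo4j:latest
```

If the container still takes minutes to report "Started" or gets OOM-killed, the machine is the problem, not the test code.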
I have a Java program running on a CentOS box.
My -Xmx and -Xms are set to 4000 MB.
The program works fine.
But when I do free -m, the used memory shows as only 506 MB. As per my understanding, the -Xms memory should be reserved for the JVM. Why does the free command not show the memory as used by Java?
I have also run jstat -gccapacity $(pidof java), and there NGCMN and NGCMX are updated and have the same value.
Any support would be helpful.
I'm running my program as java -Xms41000m -Xmx42000m -jar
Even when -Xmx and -Xms are set to the same value, the space reserved for the Java heap is not immediately allocated in RAM.
The operating system typically allocates physical memory lazily, only on the first access to a virtual page. So, while the unused part of the Java heap is not touched, it won't really consume memory.
You may use -XX:+AlwaysPreTouch option to forcibly touch all heap pages on JVM start.
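A minimal sketch of how this looks from inside the JVM: Runtime.maxMemory() reports the -Xmx ceiling, while Runtime.totalMemory() reports only the heap committed so far (the class and method names here are my own, not from the question):

```java
// Sketch (not from the question): report the gap between the heap
// ceiling (-Xmx) and the heap actually committed by the JVM so far.
public class HeapReport {
    // convert bytes to whole megabytes
    static long mb(long bytes) {
        return bytes / (1024 * 1024);
    }

    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // maxMemory(): the -Xmx limit; totalMemory(): heap committed so far.
        // With -Xms < -Xmx and lazy OS allocation, committed starts small.
        System.out.println("max heap  (-Xmx): " + mb(rt.maxMemory()) + " MB");
        System.out.println("committed heap  : " + mb(rt.totalMemory()) + " MB");
        System.out.println("used heap       : "
                + mb(rt.totalMemory() - rt.freeMemory()) + " MB");
    }
}
```

Run it with and without -XX:+AlwaysPreTouch and compare what free -m reports for the process.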
I am getting error messages such as stack overflow and heap memory errors after trying to use TestNG. After installing TestNG, Eclipse feels heavy and has become very slow to respond, and it throws these error messages.
StackOverflowError and heap memory errors like this usually occur when the IDE runs out of the memory allocated to it, which is more likely on a machine with limited RAM or a slower processor. One solution is to allocate more memory to the Eclipse IDE. You can do that by finding the eclipse.ini file in the directory where Eclipse is installed, opening it in a text editor such as Notepad, and editing the allocated memory. There are two settings to change: -Xms and -Xmx, the minimum and maximum heap sizes. I changed mine from 256m to 512m for -Xms and from 1024m to 2048m for -Xmx. But make sure you only allocate memory that is spare, otherwise your PC might become unresponsive. Hope this helps.
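For reference, the relevant part of eclipse.ini then looks something like this (the values are the ones I used; everything after -vmargs is passed to the JVM):

```
-vmargs
-Xms512m
-Xmx2048m
```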
I'm developing an IntelliJ plugin. The plugin needs 2 GB of heap memory (... yes, it really does ;) ). I found out how to increase the memory of the IntelliJ IDEA VM by editing the idea64.exe.vmoptions file like this:
-Xms128m
-Xmx2048m
-XX:MaxPermSize=350m
-XX:ReservedCodeCacheSize=240m
-XX:+UseCodeCacheFlushing
-XX:+UseCompressedOops
If I enable the memory indicator, I can see that it worked.
But if I run/debug my plugin out of IntelliJ, the "sandbox" IntelliJ has only 1 GB of RAM.
It throws the following warning:
High memory usage (free 101 of 914 MB) while dumping threads
How can I increase the RAM of the sandbox IntelliJ?
Setting VM options in the run configuration doesn't work for me.
I found a more proper way to do it. Add the following to your build.gradle:
runIde {
    jvmArgs '-Xmx2G'
}
My test execution throws a "GC overhead limit exceeded" exception on Linux CentOS 7. I changed jmeter.bat's max heap size to 6g and min size to 512m. I am not using any listeners, preprocessors, or an HTTP Header Manager; I use a Regular Expression Extractor on 2 samplers and a shared Constant Timer. I run the test in the terminal and store the results in a .jtl file. I run it with 250 users, a ramp-up period of 1, and the scheduler set to 5400 seconds. But the issue still persists.
System configuration:
RAM 8 GB
CPU octa-core 3.12 GHz
Swap 16 GB
You say that you changed jmeter.bat, but the problem is on Linux, which doesn't use jmeter.bat. Unless it's a typo, try to change jmeter or jmeter.sh (whichever one you use to invoke JMeter).
Generally I would not recommend more than 2GB for moderate use, and 4GB for heavy use. For instance my settings are:
HEAP="-Xms4096m -Xmx4096m"
and I can run up to 300 concurrent users with a lot of samplers/heavy scripting even in GUI mode. Setting a larger heap may cause longer GC pauses, which can trigger the very exception you are getting.
After you start JMeter, run the following command to make sure the memory settings are indeed as you expect them to be:
ps -ef | grep JMeter
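On Linux you can also override the heap for a single run without editing any script, via the JVM_ARGS environment variable that the jmeter startup script honors (a sketch; test.jmx and the sizes are placeholders):

```shell
JVM_ARGS="-Xms1g -Xmx4g" ./jmeter -n -t test.jmx -l result.jtl
```

This also makes it easy to verify with ps afterwards that the flags really took effect.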
I actually changed -Xmx in the jmeter.bat file instead of the jmeter.sh file, even though I used Linux for this test. jmeter.bat is for Windows and jmeter.sh is for Linux, which is why the above-mentioned error occurred. Once I changed it in the jmeter.sh file, it worked perfectly.