Besides Heap memory, can JProfiler perform native memory profiling?
My Java application is exceeding its Linux cgroup memory limit in production, and I would like to run memory profiling during development or a performance-test cycle.
No, JProfiler only includes memory profiling for the Java heap.
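If you need to see where native (non-heap) memory goes, one option outside JProfiler is the JVM's built-in Native Memory Tracking (available on HotSpot from JDK 8 on). A minimal sketch, assuming you can add a startup flag; app.jar and <pid> are placeholders:
java -XX:NativeMemoryTracking=summary -jar app.jar
jcmd <pid> VM.native_memory summary
The summary breaks the process footprint down into heap, thread stacks, metaspace, code cache and so on, which is often what pushes a container over its cgroup limit.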
I was trying to build with Bubblewrap and couldn't find an answer anywhere. It fails with an OutOfMemoryError:
cli ERROR Command failed: gradlew.bat bundleRelease --stacktrace
FAILURE: Build failed with an exception.
* What went wrong:
unable to create native thread: possibly out of memory or process/resource limits reached
* Exception is:
java.lang.OutOfMemoryError: unable to create native thread: possibly out of memory or process/resource limits reached
It says it's out of memory, and that to allocate more I need to run the java command myself, which I can't. Is there anything I can do here?
As described in this issue: https://github.com/GoogleChromeLabs/bubblewrap/issues/611
Unfortunately, this is an issue with the JVM on Windows, and there isn't much that can be done in Bubblewrap.
It seems the JVM requires a contiguous range of memory addresses when allocating the heap. Even though the machine may have enough memory, and enough of it free, the JVM can fail to allocate the heap if the address space is fragmented. There are more details in this StackOverflow question: Java maximum memory on Windows XP
The -Xmx1536m parameter is the default used by Android Studio when creating Android projects. Removing -Xmx1536m worked in this case, but is unlikely to work in all cases, for two reasons:
If Gradle actually needs that amount of memory, it will still be unable to allocate it and the build will fail (at a later time).
It may still be impossible to allocate smaller chunks of memory.
Rebooting Windows is also known to help in these cases.
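For reference, the heap setting usually comes from the generated Android project's gradle.properties, so it can be adjusted without running the java command yourself. A minimal sketch, assuming a standard Gradle/Android project layout (the value is just an example):
# gradle.properties in the generated Android project
org.gradle.jvmargs=-Xmx1024m
Lowering or removing this line changes the heap the Gradle daemon asks for; this is standard Gradle behaviour rather than anything Bubblewrap-specific.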
How can I find a memory leak or excessive memory consumption using JProfiler in offline mode on a Linux production machine?
Thanks.
Memory leaks are analyzed in the heap walker. Whether you use offline mode or not, you have to save a heap dump at some point. In offline mode this is done with a "Trigger heap dump" trigger action.
You can also use the "jpdump" command line utility to get an HPROF heap dump from a JVM where the JProfiler agent is not loaded at all.
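If neither the JProfiler agent nor its command line tools are available on the production machine, a plain-JDK alternative (the standard jmap utility, not a JProfiler feature) can also produce an HPROF snapshot that the heap walker can open later:
jmap -dump:live,format=b,file=/tmp/heap.hprof <pid>
The live option triggers a full GC first so that only reachable objects end up in the dump; leave it out if you also want to inspect objects that have not been collected yet.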
I was trying to profile my Spark application (which uses the G1 GC) with JProfiler. I came across their website, where they mention that JProfiler remote profiling works reliably only with the standard garbage collector:
http://resources.ej-technologies.com/jprofiler/help/doc/index.html
(Under section Probe Settings/Starting Remote Sessions)
"Please note that the profiling interface JVMTI only runs reliably with the standard garbage collector. If you have VM parameters on your command line that change the garbage collector type such as
-Xincgc
-XX:+UseParallelGC
-XX:+UseConcMarkSweepGC
-XX:+UseParNewGC
please make sure to remove them. It might be a good idea to remove all -XX options if you have problems with profiling."
Is this true for the latest version of JProfiler as well (9.0)? Does this affect CPU profiling as well?
I am able to do memory profiling with VisualVM; I'm just wondering why JProfiler has this limitation (if it really is one).
It's not a limitation, it's just advice. Some of the alternative GCs are not well tested with JVMTI (the profiling interface of the JVM). G1 GC will become the standard GC, so the situation is different there.
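If you are unsure which collector your Spark JVM actually ends up using, you can ask HotSpot to print its effective flags before deciding whether anything needs to be removed (a quick check, not JProfiler-specific):
java -XX:+PrintCommandLineFlags -version
The output includes the ergonomically chosen collector flag, for example -XX:+UseParallelGC or -XX:+UseG1GC, alongside the default heap sizes.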
We have a Java EE application that runs in JBoss 7.1.1, and we must run it in virtual machines (such as VMware ESXi).
The thing is, when we run our app in the VM, performance drops by approximately 50%.
It seems like the GC goes crazy: as far as I can tell, when the GC runs it takes much longer than normal to finish, and it blocks the application in the meantime.
Has anyone else had an experience like this? Any tips, tuning advice, or a lead I can follow?
Thanks in advance.
EDIT
JVM has -Xmx and -Xms = 1 GB
VM has 4 GB RAM
Ubuntu Server, 64-bit
Oracle JVM, 64-bit
I would guess that before moving your app to the VM with the configuration you posted, it was running on a 32-bit system with a 32-bit JVM, using the same JVM parameters.
The trick is that you moved to 64-bit with a 64-bit JVM but still assigned the same heap size to your application; in reality, your app now has roughly half the usable memory it used to have.
On a 64-bit JVM (without compressed oops), references and object headers are twice the size they are on a 32-bit JVM, so the same objects take up considerably more heap.
Given the configuration you have, I would suggest a few solutions:
increase the heap size to 2 GB
or use compressed oops (-XX:+UseCompressedOops)
or install a 32-bit JVM
Given that your application does not have more than 1.3 GB assigned, I think the best performance would be achieved by installing a 32-bit JVM and running with -Xms1300m -Xmx1300m.
You can go even a step further and use a 32-bit VM with a 32-bit Linux installation.
A 64-bit JVM is only useful if you need more than about 1.3 GB of heap; otherwise it just adds overhead.
You can also run the JVM with
-verbose:gc -XX:+PrintGCDetails
which will show you what is happening with the GC; this can further help you tune your JVM.
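Putting the suggestions together for the 64-bit case, a possible starting point could look like this (a sketch with assumed values, placed where JBoss 7 reads its JVM options, typically bin/standalone.conf):
JAVA_OPTS="-Xms2g -Xmx2g -XX:+UseCompressedOops -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps"
The GC log then shows pause times and how much is reclaimed per collection, which makes it easier to tell whether the long pauses come from an undersized heap or from the virtualization layer.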
I'm using this Java version:
java version "1.6.0_24"
OpenJDK Runtime Environment (IcedTea6 1.11.3) (suse-9.1-x86_64)
OpenJDK 64-Bit Server VM (build 20.0-b12, mixed mode)
When I start a java program, e.g.
java TestApp
will the JVM run in parallel by default?
If so, which parts run in parallel?
I am interested in this because I found that if I use taskset -c 0 java TestApp to bind TestApp to processor 0, the first run is much slower than with plain java TestApp. Does this imply something?
There are a number of JVM tasks that each have a thread of their own:
the main thread, which runs your program
the background bytecode-to-native (JIT) compiler
the finalizer thread (to call finalize() on objects)
the GC thread pool
Your code will only use as many threads as you create (plus "main", which is created for you).
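You can see these VM-internal threads for yourself by taking a thread dump of a running JVM (jstack is a standard JDK tool; <pid> is a placeholder):
jstack <pid>
Besides your own threads, the dump lists the compiler, finalizer and GC worker threads, which is where most of the JVM's built-in parallelism lives.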
The JVM uses native threads and has no global interpreter lock, if that's what you're asking.
The first run is probably dominated by JIT-compiling the bytecode to machine code. I strongly suspect that process is optimized for parallel execution, which would explain why pinning the JVM to one core slows the first run down.
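If you want to confirm that the difference comes from JIT activity, HotSpot can log compilations as they happen; a quick comparison using the question's own example class:
java -XX:+PrintCompilation TestApp
taskset -c 0 java -XX:+PrintCompilation TestApp
Comparing the two runs shows how much compilation work has to share the single pinned core with the application itself.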