How can I find a memory leak and analyze memory consumption using JProfiler in offline mode on a Linux production machine?
Thanks.
Memory leaks are analyzed in the heap walker. Whether or not you use offline mode, you have to save a heap dump at some point. In offline mode this is done with a "Trigger heap dump" trigger action.
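For reference, offline mode is enabled by loading the JProfiler agent with an offline parameter on the JVM command line; a minimal sketch, where the installation path, session id and config path are placeholders for your own setup:
-agentpath:/opt/jprofiler/bin/linux-x64/libjprofilerti.so=offline,id=101,config=/home/user/.jprofiler/config.xml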
You can also use the "jpdump" command line utility to get an HPROF heap dump from a JVM where the JProfiler agent is not loaded at all.
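A minimal sketch of using it on the production machine, assuming a default installation layout (run from the JProfiler bin directory; invoked without arguments it lists the locally running JVMs so you can pick the one to dump, and the resulting HPROF file can then be opened in the heap walker):
cd /opt/jprofiler/bin
./jpdump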
Related
Besides Heap memory, can JProfiler perform native memory profiling?
My Java application is exceeding its Linux cgroup memory limit in production, and I would like to run profiling during development or a performance test cycle.
No, JProfiler only includes memory profiling for the Java heap.
I couldn't find any information on this. Is there any way to start WebLogic in profiling mode? Or is it perhaps activated by default?
Profiling can be enabled in two ways:
1) Pass the options when you execute startWebLogic.sh. The following parameters can be appended (see the sketch after this list):
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=8010
-Dcom.sun.management.jmxremote.local.only=false
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
2) The same parameters can be added on the server start tab in the WebLogic Administration Console.
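A minimal sketch of option 1, assuming a standard startWebLogic.sh that picks up JAVA_OPTIONS from the environment (the port is the one from the example above; adjust it for your setup):
export JAVA_OPTIONS="$JAVA_OPTIONS -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=8010 -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false"
./startWebLogic.sh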
Here's what I got, mainly focused on remote profiling using NetBeans:
generate a remote profiling package for the target OS and JVM architecture (32 or 64 bit): NetBeans -> Profile -> Attach Profiler -> Change link, select OS and Java platform -> click the "Create a remote profiling package" link
copy this package to the target machine
execute the calibration script (calibrate.bat / calibrate.sh, after chmod +x)
add the agent argument to JAVA_OPTIONS with the path to this package:
-agentpath:PathToProfilerPackage\lib\deployed\jdk16\windows-amd64\profilerinterface.dll=PathToProfilerPackage\lib,5140
restart WebLogic. Startup will pause until the remote profiler connects
connect to the server using the NetBeans profiler. WebLogic startup will continue.
However, I still can't download a heap dump (which is available when attaching to a local java.exe WebLogic process), but that's something.
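For a Linux target, the same argument would use the shared library from the generated package instead of the DLL; a sketch assuming the package layout mirrors the Windows one above (the paths, JDK/architecture directory and port are placeholders):
-agentpath:/path/to/ProfilerPackage/lib/deployed/jdk16/linux-amd64/libprofilerinterface.so=/path/to/ProfilerPackage/lib,5140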
I was trying to profile my Spark application (which uses the G1 GC) with JProfiler. I came across their website, where they mention that JProfiler remote profiling works reliably only with the standard garbage collector:
http://resources.ej-technologies.com/jprofiler/help/doc/index.html
(Under section Probe Settings/Starting Remote Sessions)
"Please note that the profiling interface JVMTI only runs reliably with the standard garbage collector. If you have VM parameters on your command line that change the garbage collector type such as
-Xincgc
-XX:+UseParallelGC
-XX:+UseConcMarkSweepGC
-XX:+UseParNewGC
please make sure to remove them. It might be a good idea to remove all -XX options if you have problems with profiling."
Is this true for the latest version of JProfiler (9.0) as well? Does this affect CPU profiling as well?
I am able to do memory profiling with VisualVM; I'm just wondering why this limitation (if it still applies) exists in JProfiler.
It's not a limitation, it's just advice. Some of the alternative GCs are not well tested with JVMTI (the profiling interface of the JVM). The G1 GC will become the standard GC, so the situation there is different.
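If you are unsure which collector your command line actually selects, one quick check (not JProfiler-specific) is to print the flags the JVM resolves for that command line:
java -XX:+PrintCommandLineFlags -version
The output includes the chosen collector flag (for example -XX:+UseParallelGC or -XX:+UseG1GC), which makes it easy to see which -XX options would need to be removed.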
We have a Java EE application that runs in JBoss 7.1.1, and we must run it in virtual machines (such as VMware ESXi).
The thing is, when we run our app in the VM, performance drops by approximately 50%.
It seems like the GC goes crazy... as far as I can tell, when the GC runs, it takes much longer than normal to finish, and it blocks the application in the meantime.
Has anyone else had an experience like that? Any tips, tuning advice, or a direction I can follow?
Thanks in advance.
EDIT
JVM has Xmx and Xms = 1 GB
VM has 4 GB RAM
Ubuntu Server, 64-bit
Oracle JVM, 64-bit
I would guess that before moving your app to the VM configuration you posted, it was running on a 32-bit system with a 32-bit JVM, using the same JVM parameters.
The catch is that you moved to 64-bit with a 64-bit JVM but still assigned the same heap size to your application; in effect, your app now has considerably less usable memory than it used to have.
Object references and headers on a 64-bit JVM are twice the size of their 32-bit counterparts, so the same data occupies noticeably more heap.
Given the configuration you have, I would suggest a few solutions:
increase the heap size to 2 GB
or use compressed oops
or install a 32-bit JVM
Given that your application has no more than 1.3 GB of heap assigned, I think the best performance would be achieved by installing a 32-bit JVM and running with -Xms1300m -Xmx1300m.
You can go even a step further and use a 32-bit VM with a 32-bit Linux installation.
A 64-bit JVM is only useful if you need more than about 1.3 GB of heap; otherwise it just adds overhead.
You can also run the JVM with
-verbose:gc -XX:+PrintGCDetails
which will show you what is happening with the GC; this can further help you tune your JVM.
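As a sketch, the suggestions above could be combined in the JVM options that JBoss reads (typically JAVA_OPTS in standalone.conf for JBoss 7; treat the exact location as an assumption for your installation):
JAVA_OPTS="$JAVA_OPTS -Xms2g -Xmx2g -XX:+UseCompressedOops -verbose:gc -XX:+PrintGCDetails"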
I want to run the command-line tool "purge" programmatically. I know that I can do a shell exec and call "purge", but my problem is that purge is not included in Mac OS X 10.6 and below; it is only installed with the Developer Tools.
I'm wondering how I can ship purge with my application and/or install it if it is not there.
More info:
Platform: Mac OS X
IDE: Xcode 4.6
Language: Objective-C
Don't.
The purpose of purge is described in the manual thus:
Purge can be used to approximate initial boot conditions with a cold disk
buffer cache for performance analysis.
The only effect that it has on an end-user's system is to make the system perform worse while the cache is repopulated. It should NOT be used as part of "memory cleaner" tools, as the only effects it has are negative. (Indeed, these tools should not be used.) If the system actually needs memory, disk caches will be purged as necessary.