Valgrind: How to force it to generate heap summary without terminating process?

When using Valgrind, I noticed that it only generates the Heap Summary when the process is terminating. Is there a way to force Valgrind to scan the memory and print leak reports while the process is still running?

In addition to the VALGRIND_DO_LEAK_CHECK client request, you can run valgrind with --vgdb=yes to enable the embedded gdbserver, and then issue the monitor leak_check full reachable any command at the (gdb) prompt.
This doesn't require modifying and rebuilding the target program, and it has other advantages: you can set breakpoints and perform leak checks at arbitrary points in the execution, not just at the ones where you've put in the client request.
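For example, a minimal session sketch (the program name ./myprog is a placeholder):
valgrind --vgdb=yes --vgdb-error=0 ./myprog
# then, in a second terminal:
gdb ./myprog
(gdb) target remote | vgdb
(gdb) monitor leak_check full reachable any
With --vgdb-error=0, Valgrind waits for a gdb connection before starting the program, so you can set breakpoints up front.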

Use the VALGRIND_DO_LEAK_CHECK client request from valgrind/memcheck.h.
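A minimal C sketch of the client request (the helper name report_leaks_now is hypothetical):
#include <valgrind/memcheck.h>

/* Trigger a full leak report at this point in the run; the request
   is a no-op when the program is not running under Valgrind. */
static void report_leaks_now(void)
{
    VALGRIND_DO_LEAK_CHECK;
}
Call it wherever you want an interim leak report, then run the program under valgrind as usual.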

Related

Running the wrapper file continuously to use JFR to monitor ActiveMQ performance

I have an issue with continuously running Java Flight Recorder to monitor memory usage and other performance statistics of ActiveMQ.
The wrapper configuration file (wrapper.conf) is in the directory below, alongside the wrapper, activemq, and libwrapper.so files:
../apache-activemq-5.12.1/bin/linux-x86-64/wrapper.conf
I added the lines below to enable JFR:
wrapper.java.additional.13=-XX:+UnlockCommercialFeatures
wrapper.java.additional.14=-XX:+FlightRecorder
wrapper.java.additional.15=-XX:FlightRecorderOptions=defaultrecording=true,disk=true,repository=../jfr/jfrs_%WRAPPER_PID%,settings=profile
wrapper.java.additional.16=-XX:StartFlightRecording=filename=../jfr/jfrs_%WRAPPER_PID%/myrecording.jfr,dumponexit=true,compress=true
When I run the wrapper file by hand, the expected output 'myrecording.jfr' is generated under the path specified in wrapper.conf. The problem is that I also want this to happen automatically (without running the wrapper file by hand).
What might be a possible solution for that?
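One possibility, as a sketch (assuming a SysV-init Linux; the /opt install path is an assumption): register the bundled wrapper script as a boot-time service, so the wrapper, and with it JFR, starts automatically:
ln -s /opt/apache-activemq-5.12.1/bin/linux-x86-64/activemq /etc/init.d/activemq
chkconfig --add activemq   # CentOS/RHEL-style service registration
chkconfig activemq on      # start the service on boot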

Where to check log file for JVM exception?

I receive this message after executing a load test in JMeter:
Uncaught Exception java.lang.OutOfMemoryError: unable to create new native thread. See log file for details.
Where can I see the log file?
If the JVM shuts down this way, the log should be in JMeter's "bin" folder under a name like hs_err_pidXXXXXX.log.
The error indicates that the operating system is not able to create a new thread (perhaps you have reached some form of limit), so refer to your OS documentation to learn how to increase it.
Windows: most likely you are not getting this error on Windows as it has quite high limits
Linux: How to set or change the default soft or hard limit for the number of user's processes?
MacOSX: temporary - the same as for Linux, permanent - different depending on MacOSX version
Just in case, double-check that you're following the recommendations from the 9 Easy Solutions for a JMeter Load Test “Out of Memory” Failure article.
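On Linux, for example, a quick sketch for checking and raising the per-user process limit (the username is a placeholder):
ulimit -u                        # show the current limit on user processes/threads
# to raise it permanently, add lines like these to /etc/security/limits.conf:
jmeter_user soft nproc 8192
jmeter_user hard nproc 8192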

How can I restart JVM on OutOfMemoryError _after_ making a heap dump?

I know about the -XX:+HeapDumpOnOutOfMemoryError JVM parameter. I also know about -XX:OnOutOfMemoryError="cmd args;cmd args" and that kill -3 <JVM_PID> will request a heap dump.
Question: How can I make sure that I, on OutOfMemoryError, first make a full heap dump and then force a restart (or kill) after the dump is done? Is my best bet -XX:OnOutOfMemoryError="kill -3 %p;sleep <time-it-takes-to-dump>;kill -9 %p"?
java -XX:+HeapDumpOnOutOfMemoryError -XX:OnOutOfMemoryError="kill -9 %p" TestApp
The JVM will dump the heap first, and then execute the OnOutOfMemoryError commands (proof).
If you just want to shut down the JVM, you can use one of the following parameters instead:
-XX:+ExitOnOutOfMemoryError
-XX:+CrashOnOutOfMemoryError
These VM arguments were added in Java 8u92; see the release notes:
ExitOnOutOfMemoryError
When you enable this option, the JVM exits on the first occurrence of an out-of-memory error. It can be used if you prefer restarting an instance of the JVM rather than handling out of memory errors.
CrashOnOutOfMemoryError
If this option is enabled, when an out-of-memory error occurs, the JVM crashes and produces text and binary crash files.
Enhancement request: JDK-8138745 (note that the parameter naming in that issue is wrong; per JDK-8154713 it is ExitOnOutOfMemoryError, not ExitOnOutOfMemory).
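For example, a minimal sketch combining a heap dump with an immediate exit (the dump path is an assumption):
java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/java_dumps -XX:+ExitOnOutOfMemoryError TestApp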
I bet the runtime sets a specific exit code on crash. Check for that return code and rerun the program in that case; you should probably put that into a script, as sketched below.
The Sun JRE allows you to heap dump on OOME; perhaps OpenJDK does too.
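Along those lines, a minimal restart-wrapper sketch (TestApp comes from the question; everything else is an assumption):
#!/bin/sh
# Rerun the JVM whenever it exits with a nonzero status
# (e.g. after an out-of-memory-triggered exit).
while ! java -XX:+HeapDumpOnOutOfMemoryError -XX:+ExitOnOutOfMemoryError TestApp; do
    echo "JVM exited abnormally; restarting..." >&2
done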

Strange Apache behaviour when launching an external binary called by a Perl script

I am currently setting up a web service powered by Apache, running on CentOS 6.4.
The service uses Perl CGI scripts (cgi-bin) that launch, among other things, external home-made compiled Fortran binaries.
Here is the issue: when I boot my server, everything goes well except that one of my binaries crashes systematically (with a kernel segfault) when called by my Perl scripts.
If I manually restart the httpd service (service httpd restart at the command line), the issue is completely fixed.
I examined the Apache and system logs and found nothing suspicious.
It appears that the problem occurs only when httpd is launched by the /etc/rc[0-6].d startup directives. I tried changing the launch order of httpd (S85httpd by default) to any other position, without success.
To summarize, my web service is only fully functional (with no external binary crash) when httpd is launched at the command line after the server has fully booted!
[EDIT] This issue is now resolved:
My Fortran binary handles very large arrays and complex functions requiring an unlimited stack size.
Although the stack size limit was defined system-wide (in /etc/security/limits.conf), for some reason the apache/perl/fortran-binary ensemble was not aware of it, causing my binary to crash each time it was called.
By contrast, when I manually restarted Apache at the shell prompt, the stack size limit was passed on correctly (my .bashrc runs 'ulimit -S -s unlimited').
As a workaround, I used the BSD::Resource module (http://metacpan.org/pod/BSD::Resource) to set the stack size directly in my Perl script, e.g. setrlimit(RLIMIT_STACK, $softlimit, $hardlimit);
This new stack size limit is now passed directly from my Perl script to my binary.
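A minimal Perl sketch of that workaround (the binary path is a placeholder; raising the limit this far assumes the configured hard limit allows it):
use BSD::Resource;

# Raise the stack size limit for this process (and anything it execs)
# before launching the Fortran binary.
setrlimit(RLIMIT_STACK, RLIM_INFINITY, RLIM_INFINITY)
    or die "setrlimit failed: $!";
system('/path/to/fortran_binary') == 0
    or warn "binary exited abnormally: $?";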
I've run into similar problems before. Maybe one way to solve this is to put the binary on a 'delayed start', so that it starts after everything else on your system is running. One way to do this is to put an at job in your /etc/rc.local script, to start the binary in X minutes.

valgrind on server process

Hi, I am new to Valgrind. I know how to run Valgrind on executable files from the command line, but how do you run it on server processes like apache/mysqld/traffic server, etc.?
I want to run valgrind on traffic server (http://incubator.apache.org/projects/trafficserver.html) to detect some memory leaks taking place in the plugin I have written.
Any suggestions?
thanks,
pigol
You have to start the server under Valgrind's control. Simply take the server's normal start command, and prepend it with valgrind.
Valgrind will also follow every process your main "server" process spawns if you pass --trace-children=yes. When each process ends, Valgrind outputs its analysis to stderr, so I'd recommend redirecting that to a file (or using --log-file).
If your usual start command is /usr/local/mysql/bin/mysqld, start the server instead with valgrind /usr/local/mysql/bin/mysqld.
If you usually start the service with a script (like /etc/init.d/mysql start) you'll probably need to look inside the script for the actual command the script executes, and run that instead of the script.
Don't forget to pass the --leak-check=full option to valgrind to get the memory leak report.
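Putting it together for the mysqld example above, an invocation sketch (the log-file name is an assumption; %p expands to the process ID):
valgrind --leak-check=full --trace-children=yes --log-file=valgrind-%p.log /usr/local/mysql/bin/mysqld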