valgrind on server process

Hi, I am new to Valgrind. I know how to run Valgrind on executables from the command line, but how do you run it on server processes like apache/mysqld/traffic server etc.?
I want to run Valgrind on Traffic Server (http://incubator.apache.org/projects/trafficserver.html) to detect some memory leaks taking place in a plugin I have written.
Any suggestions ?
thanks,
pigol

You have to start the server under Valgrind's control. Simply take the server's normal start command, and prepend it with valgrind.
Valgrind follows forked children automatically; pass --trace-children=yes so it also follows any helper processes the server starts via exec. When each process ends, Valgrind prints its analysis, which goes to stderr by default, so use --log-file to capture it in a file.
If your usual start command is /usr/local/mysql/bin/mysqld, start the server instead with valgrind /usr/local/mysql/bin/mysqld.
If you usually start the service with a script (like /etc/init.d/mysql start) you'll probably need to look inside the script for the actual command the script executes, and run that instead of the script.
Don't forget to pass the --leak-check=full option to valgrind to get the memory leak report.
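Putting that together, a typical invocation might look like this (the paths are illustrative; --trace-children=yes makes Valgrind follow exec'd children, and %p in the log file name expands to each process ID):
valgrind --leak-check=full --trace-children=yes \
    --log-file=/var/log/valgrind.%p.log \
    /usr/local/mysql/bin/mysqld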

Related

Execute a shell command outside of a sandbox while in a sandbox

I'm using Singularity to run Python in an environment deprived of Python. I'm also running a MySQL instance as explained by Iowa State University (start an instance of mysql, and close it when done).
For clarity, I'm using a bash script to start mysql, do what I have to do (a Python script), and close mysql, and it works fine. But Python's only way to stop when an error occurs is sys.exit([value]), and this stops not only the Python script but also the bash script that ran it. This makes it impossible for me to handle the error and close the mysql instance if the Python script exits.
My question is: is there a way for me to execute 'singularity instance stop mysql' while inside the Python sandbox? Something to tell Singularity "hey, this command here must be run on the host!"?
I keep searching but can't find anything.
I tried to execute it with subprocess like any other command, but it returned an error because that instance doesn't exist inside the Python sandbox. I don't even have singularity in the sandbox.
For any clarification, just ask; I'm trying to be clear, but I'm pretty sure it's not very clear.
Thanks a lot !
Generally speaking, it would be a big security issue if a process could be initiated from inside a container (docker or singularity) but run in the host OS's namespace.
If the bash script is exiting on the python failure, it sounds like you're using set -e or #!/bin/bash -e. This causes the script to abort if any command returns non-zero. It's commonly recommended for safer processing, but can cause problems like this at times. To bypass that for the python step you can modify your script:
# start mysql, do some stuff
set +e  # disable abort on non-zero return
python my_script.py
set -e  # re-enable abort on non-zero
# shut down mysql, do other stuff
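If you would rather keep set -e enabled for the rest of the script, a minimal alternative sketch is to record the exit status and clean up before exiting (the script and instance names are the ones from the question):
status=0
python my_script.py || status=$?   # a failing command in an || list does not trigger set -e
singularity instance stop mysql    # cleanup runs whatever the outcome
exit "$status"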

Linux loop script: Cannot allocate memory

I face a huge problem with a script on linux.
I work on an apache2 server and I have to execute a php script every second to update the database (yes, every second, I'm sure).
To do that, I created this script :
#!/bin/bash
while [ -f "MONFICHIER" ]
do
    php fichier.php >> log.txt
    sleep 1
done
exit 0
This script runs for a while, then stops with the error message "fork: Cannot allocate memory".
Everything works fine at first, but after a while plenty of defunct processes pile up, and it is because of these processes that memory fills up.
The PHP file is the index.php of the CodeIgniter framework, called with parameters for the function to run. It updates the database after checking the data.
I'm sorry, but I can't provide the source code (it's confidential); the function is fast (less than a second).
Has anybody had this problem?
Thanks!
Either your PHP script takes more than one second to execute and you end up with a lot of parallel php invocations, or it takes a lot of memory. Either way, nobody can help you with only the source of the shell script.
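Before digging into the PHP side, a couple of generic diagnostic commands can confirm where the processes and memory are going (these are standard tools, nothing specific to the script above):
# list defunct (zombie) processes together with their parent PIDs
ps -eo pid,ppid,stat,cmd | awk '$3 ~ /^Z/'
# watch available memory while the loop runs
free -m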

Valgrind: How to force it to generate heap summary without terminating process?

When using Valgrind, I noticed that it only generates the heap summary when the process is terminating. Is there a way to force Valgrind to scan the memory and print leak reports while the process is still running?
Use the VALGRIND_DO_LEAK_CHECK client request from valgrind/memcheck.h.
In addition to the VALGRIND_DO_LEAK_CHECK client request, you can also run valgrind with --vgdb=yes to enable the embedded gdbserver, and then issue the monitor leak_check full reachable any command at the (gdb) prompt.
This doesn't require modifying and rebuilding the target program, and has other advantages: you can set breakpoints and perform leak checks at arbitrary points in the execution, not just the ones where you've put in the client request.
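A minimal sketch of the client-request route (the header and macro are Valgrind's; the wrapper function is illustrative, and the request is a no-op when the program is not running under Valgrind):
#include <valgrind/memcheck.h>

/* Trigger a full leak check immediately, without waiting for exit. */
void leak_checkpoint(void)
{
    VALGRIND_DO_LEAK_CHECK;
}
And the gdbserver route, assuming the program was started as valgrind --vgdb=yes --vgdb-error=0 ./myprog, from a second terminal:
gdb ./myprog
(gdb) target remote | vgdb
(gdb) monitor leak_check full reachable any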

Fork shell script (not &)

I'm accessing a webserver via PHP. I want to update some info in the Apache configs, so I start a shell script that makes the changes. Then I want to stop and restart Apache.
Problem: as soon as I stop Apache, my PHP process stops, and my shell script, being a child process, is killed too, so Apache never restarts. The same thing happens with an Apache restart.
Is there a way to fork an independent, non-child process for the shell script, so I can restart Apache?
Thx,
Mr B
You can use disown:
disown [-ar] [-h] [jobspec ...]
Without options, each jobspec is removed from the table of active jobs. If the `-h' option is given, the job is not removed from the table, but is marked so that SIGHUP is not sent to the job if the shell receives a SIGHUP. If jobspec is not present, and neither the `-a' nor `-r' option is supplied, the current job is used. If no jobspec is supplied, the `-a' option means to remove or mark all jobs; the `-r' option without a jobspec argument restricts operation to running jobs.
./myscript.sh &
disown
./myscript.sh will continue running even if the script that started it dies.
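Since the script will outlive the Apache process that launched it, it is also worth detaching its output from Apache's pipes; a minimal sketch (the log path is illustrative):
./myscript.sh > /tmp/myscript.log 2>&1 &
disown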
Take a look at nohup; it may fit your needs.
Let's say you have a script called test.sh:
for i in $(seq 100); do
    echo $i >> test.temp
    sleep 1
done
If you run nohup ./test.sh &, you can kill the shell and the process stays alive.
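By default nohup writes the command's output to nohup.out; to choose the destination explicitly (the path is illustrative):
nohup ./test.sh > /tmp/test.log 2>&1 &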

How do you start running the program over again in gdb with 'target remote'?

When you're doing a usual gdb session on an executable file on the same computer, you can give the run command and it will start the program over again.
When you're running gdb on an embedded system, as with the command target remote localhost:3210, how do you start the program over again without quitting and restarting your gdb session?
You are looking for Multi-Process Mode for gdbserver and set remote exec-file filename
Unfortunately, I don't know of a way to restart the application and still maintain your session. A workaround is to set the PC back to the entry point of your program. You can do this by either calling:
jump function
or
set $pc=address.
If you munged the arguments to main, you may need to set them up again.
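For example, jumping back to main, or pointing the program counter at a known entry address (the function name and address are illustrative):
(gdb) jump main
(gdb) set $pc = 0x00008000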
Edit:
There are a couple of caveats with the above method that could cause problems.
If you are in a multi-threaded program, jumping to main will jump only the current thread to main; all other threads remain where they were. If the current thread held a lock... then you have some problems.
Memory leaks: if your program allocates memory during initialization, then you have just leaked all of it with the jump.
Open files will still remain open. If you mmap some files or a fixed address again, the call will most likely fail.
So, using jump isn't the same thing as restarting the program.
"jump _start" is the usual way.
Presumably you are running gdbserver on the embedded system.
You can ask it to restart your program instead of exiting with target extended-remote
Step-by-step procedure
Remote:
# pwd contains cross-compiled ./myexec
gdbserver --multi :1234
Local:
# pwd also contains the same cross-compiled ./myexec
gdb -ex 'target extended-remote 192.168.0.1:1234' \
-ex 'set remote exec-file ./myexec' \
--args ./myexec arg1 arg2
(gdb) r
[Inferior 1 (process 1234) exited normally]
(gdb) r
[Inferior 1 (process 1235) exited normally]
(gdb) monitor exit
Tested in Ubuntu 14.04.
It is also possible to pass CLI arguments to the program as:
gdbserver --multi :1234 ./myexec arg1 arg2
and the ./myexec part removes the need for set remote exec-file ./myexec, but this has the following annoyances:
undocumented: https://sourceware.org/bugzilla/show_bug.cgi?id=21981
does not show on show args and does not persist across restarts: https://sourceware.org/bugzilla/show_bug.cgi?id=21980
Pass environment variables and change working directory without restart: How to modify the environment variables and working directory of gdbserver --multi without restarting it?
If you are running regular gdb, you can type 'run' (shortcut 'r') and gdb asks you if you wish to restart the program.
For me the method described in 21.2 Sample GDB session startup works great. When I enter monitor reset halt later at the "(gdb)" prompt, the target hardware is reset and I can restart the application with c (= continue).
The load command can be omitted between runs because there is no need to flash the program again and again.
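A minimal transcript of that flow (monitor reset halt is an OpenOCD command, as in the referenced documentation):
(gdb) monitor reset halt
(gdb) continue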
You can use the gdb jump command. To find a suitable target, check your startup script.
My startup script defines this symbol:
.section .text.Reset_Handler
.weak Reset_Handler
.type Reset_Handler, %function
Reset_Handler:
    ldr r0, =_estack
    mov sp, r0 /* set stack pointer */
I wanted to jump to the start, so I used:
jump Reset_Handler
On EFM32 Happy Gecko none of the suggestions would work for me, so here is what I have learned from the documentation on integrating GDB into the Eclipse environment.
(gdb) mon reset 0
(gdb) continue
(gdb) continue
This puts me in the state that I would have expected when hitting reset from the IDE.