If I compile a 32-bit executable on Mac OS X 10.6 with the -m32 flag, like:
gcc -m32 test.c -o test
running valgrind on "test" fails with the error:
valgrind: ./test: cannot execute binary file
Are there any flags I can pass to valgrind to make it execute this? Or is my only option to build valgrind itself in 32-bit mode?
Thanks
Blender, the -m32 flag just tells the compiler to build the file in 32-bit mode. Mac OS X 10.6 runs 32-bit executables just fine.
I had this problem too, with valgrind built and installed by MacPorts. When I built it myself, however, the problem went away. I can confirm that a default build of valgrind, with no extra configure options, supports both 32-bit and 64-bit programs on Snow Leopard (I used version 3.6.1).
What version of Valgrind are you having trouble with?
On Linux and Mac OS X, a single build of Valgrind automatically detects and does the right thing for both 32-bit and 64-bit binaries.
Here is what I see on Mac OS X 10.6.7 (10J869):
$ echo "int main() { free(1); return 0; }" | gcc -xc - -g -o a.out
$ echo "int main() { free(1); return 0; }" | gcc -xc - -g -o a.out32 -m32
$ valgrind --version
valgrind-3.7.0.SVN
$ valgrind ./a.out
==46102== Memcheck, a memory error detector
==46102== Copyright (C) 2002-2010, and GNU GPL'd, by Julian Seward et al.
==46102== Using Valgrind-3.7.0.SVN and LibVEX; rerun with -h for copyright info
==46102== Command: ./a.out
==46102==
--46102-- ./a.out:
--46102-- dSYM directory is missing; consider using --dsymutil=yes
==46102== Invalid free() / delete / delete[] / realloc()
==46102== at 0x100010E9F: free (vg_replace_malloc.c:366)
==46102== by 0x100000F26: main (in ./a.out)
==46102== Address 0x1 is not stack'd, malloc'd or (recently) free'd
==46102==
==46102==
==46102== HEAP SUMMARY:
==46102== in use at exit: 88 bytes in 1 blocks
==46102== total heap usage: 1 allocs, 1 frees, 88 bytes allocated
==46102==
==46102== LEAK SUMMARY:
==46102== definitely lost: 0 bytes in 0 blocks
==46102== indirectly lost: 0 bytes in 0 blocks
==46102== possibly lost: 0 bytes in 0 blocks
==46102== still reachable: 0 bytes in 0 blocks
==46102== suppressed: 88 bytes in 1 blocks
==46102==
==46102== For counts of detected and suppressed errors, rerun with: -v
==46102== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
$ valgrind ./a.out32
==46103== Memcheck, a memory error detector
==46103== Copyright (C) 2002-2010, and GNU GPL'd, by Julian Seward et al.
==46103== Using Valgrind-3.7.0.SVN and LibVEX; rerun with -h for copyright info
==46103== Command: ./a.out32
==46103==
--46103-- ./a.out32:
--46103-- dSYM directory is missing; consider using --dsymutil=yes
==46103== Invalid free() / delete / delete[] / realloc()
==46103== at 0xF7D8: free (vg_replace_malloc.c:366)
==46103== by 0x1F7B: main (in ./a.out32)
==46103== Address 0x1 is not stack'd, malloc'd or (recently) free'd
==46103==
==46103==
==46103== HEAP SUMMARY:
==46103== in use at exit: 320 bytes in 7 blocks
==46103== total heap usage: 7 allocs, 1 frees, 320 bytes allocated
==46103==
==46103== LEAK SUMMARY:
==46103== definitely lost: 0 bytes in 0 blocks
==46103== indirectly lost: 0 bytes in 0 blocks
==46103== possibly lost: 0 bytes in 0 blocks
==46103== still reachable: 260 bytes in 6 blocks
==46103== suppressed: 60 bytes in 1 blocks
==46103== Rerun with --leak-check=full to see details of leaked memory
==46103==
==46103== For counts of detected and suppressed errors, rerun with: -v
==46103== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
Related
During startup, valgrind prints the following and terminates silently. Why does that happen, and what does it mean?
==2758== Memcheck, a memory error detector
==2758== Copyright (C) 2002-2015, and GNU GPL'd, by Julian Seward et al.
==2758== Using Valgrind-3.11.0 and LibVEX; rerun with -h for copyright info
==2758== Command: /usr/local/bin/bp_mat_00
==2758==
==2758== error writing 36 bytes to shared mem /tmp/vgdb-pipe-shared-mem-vgdb-2758-by-root-on-???
This means that Valgrind tried to write to a filesystem (here, the one holding /tmp) that is full. You can check with:
df -h
However, if the filesystem only fills up at a later stage of the Valgrind run, Valgrind may not print that message and may not terminate, but it can still work incorrectly...
Visual Leak Detector reports a memory leak (a minor 40 bytes) in the following code:
...
void simulatememoryleak() {
    boost::asio::io_service m_IOService;
    boost::asio::serial_port m_SerialPort(m_IOService, "COM21");
    m_SerialPort.cancel();
    m_SerialPort.close();
    m_IOService.stop();
    m_IOService.reset();
}
..
Can anyone suggest why this is?
I have also posted questions to the VLD and boost communities.
On my Linux box it only leaks when there is no permission to open the serial port (in which case it aborts with an exception).
Here's valgrind output from the working run¹:
$ sudo valgrind --leak-check=full ./test
==21281== Memcheck, a memory error detector
==21281== Copyright (C) 2002-2013, and GNU GPL'd, by Julian Seward et al.
==21281== Using Valgrind-3.10.0.SVN and LibVEX; rerun with -h for copyright info
==21281== Command: ./test
==21281==
==21281==
==21281== HEAP SUMMARY:
==21281== in use at exit: 0 bytes in 0 blocks
==21281== total heap usage: 10 allocs, 10 frees, 851 bytes allocated
==21281==
==21281== All heap blocks were freed -- no leaks are possible
==21281==
==21281== For counts of detected and suppressed errors, rerun with: -v
==21281== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
When not running as root I get
==21286== possibly lost: 331 bytes in 4 blocks
==21286== still reachable: 664 bytes in 8 blocks
¹ using /dev/ttyS0 or similar
As part of encog install testing, I tried running
./encog benchmark /gpu:0, which worked fine, but when I tried
./encog benchmark /gpu:1, I got:
encog-core/cuda_eval.cu(286) : getLastCudaError() CUDA error : kernel launch failure : (13) invalid device symbol.
I am on Ubuntu 11.10; I got the source code from https://github.com/encog/encog-c,
and "make ARCH=64 CUDA=1" completed without error.
Thanks for any help in solving this problem.
Here's the console list for the benchmark that worked fine:
rick@rick-cuda:~/a01-neuralnet-encog/encog-c-master$ ./encog benchmark /gpu:0
* * Encog C/C++ (64 bit, CUDA) Command Line v1.0 * *
Copyright 2012 by Heaton Research, Released under the Apache License
Build Date: May 4 2013 07:24:00
Processor/Core Count: 32
Basic Data Type: double (64 bits)
GPU: disabled
Input Count: 10
Ideal Count: 1
Records: 10000
Iterations: 100
Performing benchmark...please wait
Benchmark time(seconds): 3.2856
Benchmark time includes only training time.
Encog Finished. Run time 00:00:03.2904
=============================================
Here's the benchmark run that had the problem
rick@rick-cuda:~/a01-neuralnet-encog/encog-c-master$ ./encog benchmark /gpu:1
* * Encog C/C++ (64 bit, CUDA) Command Line v1.0 * *
Copyright 2012 by Heaton Research, Released under the Apache License
Build Date: May 4 2013 07:24:00
Processor/Core Count: 32
Basic Data Type: double (64 bits)
GPU: enabled
Input Count: 10
Ideal Count: 1
Records: 10000
Iterations: 100
Performing benchmark...please wait
encog-core/cuda_eval.cu(286) : getLastCudaError() CUDA error : kernel launch failure : (13) invalid device symbol.
==========================================
Here's what my GPU environment looks like:
rick@rick-cuda:~/a01-neuralnet-encog/encog-c-master$ ./encog cuda
* * Encog C/C++ (64 bit, CUDA) Command Line v1.0 * *
Copyright 2012 by Heaton Research, Released under the Apache License
Build Date: May 4 2013 07:24:00
Processor/Core Count: 32
Basic Data Type: double (64 bits)
GPU: enabled
Device 0: GeForce GTX 690
CUDA Driver Version / Runtime Version 5.0 / 5.0
CUDA Capability Major/Minor version number: 3.0
Total amount of global memory: 2048 MBytes (2147287040 bytes)
( 8) Multiprocessors x (192) CUDA Cores/MP: 1536 CUDA Cores
GPU Clock Speed: 1.02 GHz
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per block: 1024
Maximum sizes of each dimension of a block: 1024 x 1024 x 64
Maximum sizes of each dimension of a grid: 2147483647 x 65535 x 65535
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Devices 1-7: GeForce GTX 690 (each reports the same specifications as Device 0)
Performing CUDA test.
Vector Addition
CUDA Vector Add Test was successful.
Encog Finished. Run time 00:00:10.9206
===============================
Here's the output of my "make":
rick@rick-cuda:~/a01-neuralnet-encog/encog-c-master$ make ARCH=64 CUDA=1
mkdir -p ./obj-cmd
gcc -c -o obj-cmd/encog-cmd.o encog-cmd/encog-cmd.c -I./encog-core/ -fopenmp -std=gnu99 -pedantic -O3 -Wall -m64 -DENCOG_CUDA=1 -I/usr/local/cuda/include
mkdir -p ./obj-cmd
gcc -c -o obj-cmd/cuda_test.o encog-cmd/cuda_test.c -I./encog-core/ -fopenmp -std=gnu99 -pedantic -O3 -Wall -m64 -DENCOG_CUDA=1 -I/usr/local/cuda/include
mkdir -p ./obj-cmd
gcc -c -o obj-cmd/node_unix.o encog-cmd/node_unix.c -I./encog-core/ -fopenmp -std=gnu99 -pedantic -O3 -Wall -m64 -DENCOG_CUDA=1 -I/usr/local/cuda/include
mkdir -p ./obj-cmd
/usr/local/cuda/bin/nvcc -o obj-cmd/cuda_vecadd.cu.o -c encog-cmd/cuda_vecadd.cu -I./encog-core/ -m64
mkdir -p ./obj-lib
gcc -c -o obj-lib/activation.o encog-core/activation.c -I./encog-core/ -fopenmp -std=gnu99 -pedantic -O3 -Wall -m64 -DENCOG_CUDA=1 -I/usr/local/cuda/include
mkdir -p ./obj-lib
gcc -c -o obj-lib/errorcalc.o encog-core/errorcalc.c -I./encog-core/ -fopenmp -std=gnu99 -pedantic -O3 -Wall -m64 -DENCOG_CUDA=1 -I/usr/local/cuda/include
mkdir -p ./obj-lib
gcc -c -o obj-lib/network_io.o encog-core/network_io.c -I./encog-core/ -fopenmp -std=gnu99 -pedantic -O3 -Wall -m64 -DENCOG_CUDA=1 -I/usr/local/cuda/include
mkdir -p ./obj-lib
gcc -c -o obj-lib/util.o encog-core/util.c -I./encog-core/ -fopenmp -std=gnu99 -pedantic -O3 -Wall -m64 -DENCOG_CUDA=1 -I/usr/local/cuda/include
mkdir -p ./obj-lib
gcc -c -o obj-lib/util_str.o encog-core/util_str.c -I./encog-core/ -fopenmp -std=gnu99 -pedantic -O3 -Wall -m64 -DENCOG_CUDA=1 -I/usr/local/cuda/include
mkdir -p ./obj-lib
gcc -c -o obj-lib/data.o encog-core/data.c -I./encog-core/ -fopenmp -std=gnu99 -pedantic -O3 -Wall -m64 -DENCOG_CUDA=1 -I/usr/local/cuda/include
mkdir -p ./obj-lib
gcc -c -o obj-lib/errors.o encog-core/errors.c -I./encog-core/ -fopenmp -std=gnu99 -pedantic -O3 -Wall -m64 -DENCOG_CUDA=1 -I/usr/local/cuda/include
mkdir -p ./obj-lib
gcc -c -o obj-lib/network.o encog-core/network.c -I./encog-core/ -fopenmp -std=gnu99 -pedantic -O3 -Wall -m64 -DENCOG_CUDA=1 -I/usr/local/cuda/include
mkdir -p ./obj-lib
gcc -c -o obj-lib/pso.o encog-core/pso.c -I./encog-core/ -fopenmp -std=gnu99 -pedantic -O3 -Wall -m64 -DENCOG_CUDA=1 -I/usr/local/cuda/include
mkdir -p ./obj-lib
gcc -c -o obj-lib/util_file.o encog-core/util_file.c -I./encog-core/ -fopenmp -std=gnu99 -pedantic -O3 -Wall -m64 -DENCOG_CUDA=1 -I/usr/local/cuda/include
mkdir -p ./obj-lib
gcc -c -o obj-lib/vector.o encog-core/vector.c -I./encog-core/ -fopenmp -std=gnu99 -pedantic -O3 -Wall -m64 -DENCOG_CUDA=1 -I/usr/local/cuda/include
mkdir -p ./obj-lib
gcc -c -o obj-lib/encog.o encog-core/encog.c -I./encog-core/ -fopenmp -std=gnu99 -pedantic -O3 -Wall -m64 -DENCOG_CUDA=1 -I/usr/local/cuda/include
mkdir -p ./obj-lib
gcc -c -o obj-lib/nm.o encog-core/nm.c -I./encog-core/ -fopenmp -std=gnu99 -pedantic -O3 -Wall -m64 -DENCOG_CUDA=1 -I/usr/local/cuda/include
mkdir -p ./obj-lib
gcc -c -o obj-lib/object.o encog-core/object.c -I./encog-core/ -fopenmp -std=gnu99 -pedantic -O3 -Wall -m64 -DENCOG_CUDA=1 -I/usr/local/cuda/include
mkdir -p ./obj-lib
gcc -c -o obj-lib/rprop.o encog-core/rprop.c -I./encog-core/ -fopenmp -std=gnu99 -pedantic -O3 -Wall -m64 -DENCOG_CUDA=1 -I/usr/local/cuda/include
mkdir -p ./obj-lib
gcc -c -o obj-lib/hash.o encog-core/hash.c -I./encog-core/ -fopenmp -std=gnu99 -pedantic -O3 -Wall -m64 -DENCOG_CUDA=1 -I/usr/local/cuda/include
mkdir -p ./obj-lib
gcc -c -o obj-lib/train.o encog-core/train.c -I./encog-core/ -fopenmp -std=gnu99 -pedantic -O3 -Wall -m64 -DENCOG_CUDA=1 -I/usr/local/cuda/include
mkdir -p ./obj-lib
/usr/local/cuda/bin/nvcc -o obj-lib/encog_cuda.cu.o -c encog-core/encog_cuda.cu -I./encog-core/ -m64
mkdir -p ./obj-lib
/usr/local/cuda/bin/nvcc -o obj-lib/cuda_eval.cu.o -c encog-core/cuda_eval.cu -I./encog-core/ -m64
ptxas /tmp/tmpxft_00001b04_00000000-5_cuda_eval.ptx, line 141; warning : Double is not supported. Demoting to float
mkdir -p ./lib
ar rcs ./lib/encog.a ./obj-lib/activation.o ./obj-lib/errorcalc.o ./obj-lib/network_io.o ./obj-lib/util.o ./obj-lib/util_str.o ./obj-lib/data.o ./obj-lib/errors.o ./obj-lib/network.o ./obj-lib/pso.o ./obj-lib/util_file.o ./obj-lib/vector.o ./obj-lib/encog.o ./obj-lib/nm.o ./obj-lib/object.o ./obj-lib/rprop.o ./obj-lib/hash.o ./obj-lib/train.o ./obj-lib/encog_cuda.cu.o ./obj-lib/cuda_eval.cu.o
gcc -o encog obj-cmd/encog-cmd.o obj-cmd/cuda_test.o obj-cmd/node_unix.o obj-cmd/cuda_vecadd.cu.o lib/encog.a -I./encog-core/ -fopenmp -std=gnu99 -pedantic -O3 -Wall -m64 -DENCOG_CUDA=1 -I/usr/local/cuda/include -lm ./lib/encog.a -L/usr/local/cuda/lib64 -lcudart
rick@rick-cuda:~/a01-neuralnet-encog/encog-c-master$
I tried running this on my GeForce 580 and had no problem. I'm on a different GPU generation than you, though, since yours is a 600-series card. I looked up the error in a few places on Google; it looks like it may be an issue with the way local memory is used, which might not work on the 600 series. You might want to submit an issue here:
https://github.com/encog/encog-c/issues
How do you get Valgrind to show exactly where an error occurred? I compiled my program (on a Linux machine, reached from Windows via a PuTTY terminal) with the -g debug option.
When I run Valgrind, I get the heap and leak summaries, and I have definitely lost memory, but I never get information about where it happens (file name, line number). Shouldn't Valgrind tell me on which line I allocated the memory that it later fails to be deallocated?
==15746==
==15746== HEAP SUMMARY:
==15746== in use at exit: 54 bytes in 6 blocks
==15746== total heap usage: 295 allocs, 289 frees, 11,029 bytes allocated
==15746==
==15746== LEAK SUMMARY:
==15746== definitely lost: 12 bytes in 3 blocks
==15746== indirectly lost: 42 bytes in 3 blocks
==15746== possibly lost: 0 bytes in 0 blocks
==15746== still reachable: 0 bytes in 0 blocks
==15746== suppressed: 0 bytes in 0 blocks
==15746== Rerun with --leak-check=full to see details of leaked memory
==15746==
==15746== For counts of detected and suppressed errors, rerun with: -v
==15746== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 15 from 8)
I've repeatedly gotten hosed by this and couldn't figure out why --leak-check=full wasn't working for me, so I thought I'd bump up tune2fs's comment.
The most likely problem is that you (not ShrimpCrackers, but whoever is reading this post right now) have placed --leak-check=full at the end of your command line. Valgrind wants its flags to come before the command that runs your program.
i.e.:
valgrind --leak-check=full ./myprogram
NOT:
valgrind ./myprogram --leak-check=full
Try valgrind --leak-check=full
This normally prints more useful information.
Also add the -O0 flag when compiling so your code doesn't get optimized.
It's not an option related to valgrind. Instead, the code has to be compiled with the -g option in order to preserve debug symbols.
cc -g main.c
valgrind --trace-children=yes --track-fds=yes --track-origins=yes --leak-check=full --show-leak-kinds=all ./a.out
Let me be more specific for other readers (I had the same problem, but my arguments were in the right order):
I found out that valgrind needs the path to the executable; if you don't give it that, it will still run, but it won't give you the line numbers.
In my case the executable was in a different directory, which was in my PATH, but to get the line information you have to run
valgrind --leak-check=full path_to_myprogram/myprogram
In order for valgrind to show the lines where the errors occurred in the file,
I had to add -g to the END of my compile command.
For Example:
gcc -o main main.c -g
Then just run valgrind:
valgrind --leak-check=full --show-leak-kinds=all --track-origins=yes ./main
Killing the valgrind process itself leaves no report on the inner process' execution.
Is it possible to send a terminate signal to a process running inside valgrind?
There is no "inner process" as both valgrind itself and the client program it is running execute in a single process.
Signals sent to that process will be delivered to the client program as normal. If the signal causes the process to terminate, then valgrind's normal exit handlers will run and (for example) report any leaks.
So, for example, if we start valgrind on a sleep command:
bericote [~] % valgrind sleep 240
==9774== Memcheck, a memory error detector
==9774== Copyright (C) 2002-2010, and GNU GPL'd, by Julian Seward et al.
==9774== Using Valgrind-3.6.1 and LibVEX; rerun with -h for copyright info
==9774== Command: sleep 240
==9774==
then kill that command:
bericote [~] % kill -TERM 9774
then the process will exit and valgrind's exit handlers will run:
==9774==
==9774== HEAP SUMMARY:
==9774== in use at exit: 0 bytes in 0 blocks
==9774== total heap usage: 30 allocs, 30 frees, 3,667 bytes allocated
==9774==
==9774== All heap blocks were freed -- no leaks are possible
==9774==
==9774== For counts of detected and suppressed errors, rerun with: -v
==9774== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 6 from 6)
[1] 9774 terminated valgrind sleep 240
The only exception is kill -9: in that case the process is killed by the kernel without ever being informed of the signal, so valgrind has no opportunity to do anything.