Valgrind not printing inline error messages, just heap and leak summaries

I'm using the command:
valgrind --tool=memcheck --leak-check=yes ./prog
When this runs with a test script, I get no inline error messages or warnings; I just get a heap summary and a leak summary.
Am I missing a flag or something?
==31420== HEAP SUMMARY:
==31420== in use at exit: 1,580 bytes in 10 blocks
==31420== total heap usage: 47 allocs, 37 frees, 7,132 bytes allocated
==31420==
==31420== 1,580 (1,440 direct, 140 indirect) bytes in 5 blocks are definitely lost in loss record 2 of 2
==31420== at 0x4C274A8: malloc (vg_replace_malloc.c:236)
==31420== by 0x400FD4: main (lab1.c:51)
==31420==
==31420== LEAK SUMMARY:
==31420== definitely lost: 1,440 bytes in 5 blocks
==31420== indirectly lost: 140 bytes in 5 blocks
==31420== possibly lost: 0 bytes in 0 blocks
==31420== still reachable: 0 bytes in 0 blocks
==31420== suppressed: 0 bytes in 0 blocks
==31420==
==31420== For counts of detected and suppressed errors, rerun with: -v
==31420== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 4 from 4)
The last time I used Valgrind (a few days ago), it would print error messages as they occurred, in addition to the heap and leak summaries.
EDIT:
I tried --leak-check=full; same result.
The line mentioned in the loss record (lab1.c:51) is:
temp_record = malloc(sizeof(struct server_record));
And I use this pointer pretty often in my code. That is what was so helpful about the Valgrind error messages before: they would show me when I lost my pointer to this malloc, or other problems.
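For contrast, Memcheck prints inline messages only at the moment an actual memory error happens (an invalid read or write, use of an uninitialised value, and so on); losing the last pointer to a block is silent, and the leak is only discovered by the leak search at exit. A minimal hedged sketch (not the asker's code) showing both behaviors:

#include <stdlib.h>

int main(void) {
    char *p = malloc(16);
    p = NULL;        /* pointer lost: no inline message, only a leak record at exit */

    char *q = malloc(16);
    q[16] = 'x';     /* invalid write of size 1: reported inline, as it happens */
    free(q);
    return 0;
}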

Related

VxWorks allocates more memory than requested

I have a question regarding memory allocation in VxWorks.
It looks like VxWorks allocates a few bytes more than requested.
Scenario 1:
I request 64 bytes. VxWorks allocates 66 bytes, a difference of 2 bytes.
Scenario 2:
I request 88 bytes. VxWorks allocates 96 bytes, a difference of 8 bytes.
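A few bytes of overhead per allocation is normal for heap allocators: they typically prepend a small bookkeeping header to each block and round the total size up to the platform alignment. A minimal sketch of that arithmetic; HEADER_BYTES and ALIGN are assumed illustrative values, not actual VxWorks internals, so the output shows the mechanism rather than reproducing the exact numbers above:

#include <stdio.h>
#include <stddef.h>

/* Assumed bookkeeping values for illustration; the real VxWorks heap
 * may use a different header size and rounding rule. */
#define HEADER_BYTES 2
#define ALIGN        8

/* Round (request + header) up to the next multiple of ALIGN. */
static size_t block_size(size_t requested) {
    size_t total = requested + HEADER_BYTES;
    return (total + ALIGN - 1) & ~(size_t)(ALIGN - 1);
}

int main(void) {
    printf("request 64 -> block %zu\n", block_size(64));  /* 72 with these values */
    printf("request 88 -> block %zu\n", block_size(88));  /* 96 with these values */
    return 0;
}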

valgrind output points to gfortran library

I am running valgrind-3.10.0 to search for memory leaks in my Fortran program. I'm using gfortran-4.9.0 to compile on OS X 10.9.5. From what I can tell from the output below, the memory leak is in a gfortran library. Am I correct? If so, is there anything that I can do?
==30650== HEAP SUMMARY:
==30650== in use at exit: 25,727 bytes in 390 blocks
==30650== total heap usage: 34,130 allocs, 33,740 frees, 11,306,357 bytes allocated
==30650==
==30650== Searching for pointers to 390 not-freed blocks
==30650== Checked 9,113,592 bytes
==30650==
==30650== 72 (36 direct, 36 indirect) bytes in 1 blocks are definitely lost in loss record 52 of 84
==30650== at 0x47E1: malloc (vg_replace_malloc.c:300)
==30650== by 0x345AB0: __Balloc_D2A (in /usr/lib/system/libsystem_c.dylib)
==30650== by 0x345CF6: __i2b_D2A (in /usr/lib/system/libsystem_c.dylib)
==30650== by 0x34362E: __dtoa (in /usr/lib/system/libsystem_c.dylib)
==30650== by 0x36A8A9: __vfprintf (in /usr/lib/system/libsystem_c.dylib)
==30650== by 0x3912DA: __v2printf (in /usr/lib/system/libsystem_c.dylib)
==30650== by 0x376F66: _vsnprintf (in /usr/lib/system/libsystem_c.dylib)
==30650== by 0x376FC5: vsnprintf_l (in /usr/lib/system/libsystem_c.dylib)
==30650== by 0x3674DC: snprintf (in /usr/lib/system/libsystem_c.dylib)
==30650== by 0xE2F6D: write_float (in /usr/local/gfortran/lib/libgfortran.3.dylib)
==30650== by 0xE53A4: _gfortrani_write_real (in /usr/local/gfortran/lib/libgfortran.3.dylib)
==30650== by 0x3FA9999999999999: ???
==30650==
==30650== LEAK SUMMARY:
==30650== definitely lost: 36 bytes in 1 blocks
==30650== indirectly lost: 36 bytes in 1 blocks
==30650== possibly lost: 0 bytes in 0 blocks
==30650== still reachable: 316 bytes in 7 blocks
==30650== suppressed: 25,339 bytes in 381 blocks
==30650== Reachable blocks (those to which a pointer was found) are not shown.
==30650== To see them, rerun with: --leak-check=full --show-leak-kinds=all
==30650==
==30650== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 15 from 15)
--30650--
--30650-- used_suppression: 34 OSX109:6-Leak /usr/local/lib/valgrind/default.supp:797 suppressed: 13,656 bytes in 252 blocks
--30650-- used_suppression: 1 OSX109:1-Leak /usr/local/lib/valgrind/default.supp:747 suppressed: 2,064 bytes in 1 blocks
--30650-- used_suppression: 13 OSX109:7-Leak /usr/local/lib/valgrind/default.supp:808 suppressed: 7,181 bytes in 78 blocks
--30650-- used_suppression: 11 OSX109:10-Leak /usr/local/lib/valgrind/default.supp:839 suppressed: 1,669 bytes in 29 blocks
--30650-- used_suppression: 10 OSX109:9-Leak /usr/local/lib/valgrind/default.supp:829 suppressed: 609 bytes in 15 blocks
--30650-- used_suppression: 5 OSX109:5-Leak /usr/local/lib/valgrind/default.supp:787 suppressed: 144 bytes in 5 blocks
--30650-- used_suppression: 1 OSX109:3-Leak /usr/local/lib/valgrind/default.supp:765 suppressed: 16 bytes in 1 blocks
==30650==
==30650== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 15 from 15)
This could very well be a bug in the gfortran library.
Your best bet would be to reduce this to a self-contained test case and report it to the gfortran developers at fortran@gcc.gnu.org, or to submit a bug report at http://gcc.gnu.org/bugzilla/ .
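If it does turn out to be a leak inside the library that you cannot fix, you can at least silence it with a suppression. A hedged sketch of an entry (frame names copied from the trace above; in practice, generate the exact entry with --gen-suppressions=all and pass the file to Valgrind via --suppressions=<file>):

{
   gfortran-write_float-leak
   Memcheck:Leak
   fun:malloc
   fun:__Balloc_D2A
   ...
   obj:*libgfortran*
}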

dump valgrind data

I am using Valgrind on a program which runs an infinite loop.
Memcheck displays the memory leaks after the end of the program, but as my program has an infinite loop, it will never end.
So is there any way I can forcefully dump the data from Valgrind from time to time?
Thanks
Have a look at the client requests feature of memcheck. You can probably use VALGRIND_DO_LEAK_CHECK or similar.
EDIT:
In response to the statement above that this doesn't work, here is an example program which loops forever:
#include <valgrind/memcheck.h>
#include <unistd.h>
#include <cstdlib>

int main(int argc, char* argv[])
{
    while (true) {
        char* leaked = new char[1];   // leak one byte per iteration
        VALGRIND_DO_LEAK_CHECK;       // ask memcheck for a leak report right now
        sleep(1);
    }
    return EXIT_SUCCESS;
}
When I run this in valgrind, I get an endless output of new leaks:
$ valgrind ./a.out
==16082== Memcheck, a memory error detector
==16082== Copyright (C) 2002-2011, and GNU GPL'd, by Julian Seward et al.
==16082== Using Valgrind-3.7.0 and LibVEX; rerun with -h for copyright info
==16082== Command: ./a.out
==16082==
==16082== LEAK SUMMARY:
==16082== definitely lost: 0 bytes in 0 blocks
==16082== indirectly lost: 0 bytes in 0 blocks
==16082== possibly lost: 0 bytes in 0 blocks
==16082== still reachable: 1 bytes in 1 blocks
==16082== suppressed: 0 bytes in 0 blocks
==16082== Reachable blocks (those to which a pointer was found) are not shown.
==16082== To see them, rerun with: --leak-check=full --show-reachable=yes
==16082==
==16082== 1 bytes in 1 blocks are definitely lost in loss record 2 of 2
==16082== at 0x4C2BF77: operator new[](unsigned long) (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==16082== by 0x4007EE: main (testme.cc:9)
==16082==
==16082== LEAK SUMMARY:
==16082== definitely lost: 1 bytes in 1 blocks
==16082== indirectly lost: 0 bytes in 0 blocks
==16082== possibly lost: 0 bytes in 0 blocks
==16082== still reachable: 1 bytes in 1 blocks
==16082== suppressed: 0 bytes in 0 blocks
==16082== Reachable blocks (those to which a pointer was found) are not shown.
==16082== To see them, rerun with: --leak-check=full --show-reachable=yes
==16082==
==16082== 2 bytes in 2 blocks are definitely lost in loss record 2 of 2
==16082== at 0x4C2BF77: operator new[](unsigned long) (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==16082== by 0x4007EE: main (testme.cc:9)
==16082==
==16082== LEAK SUMMARY:
==16082== definitely lost: 2 bytes in 2 blocks
==16082== indirectly lost: 0 bytes in 0 blocks
==16082== possibly lost: 0 bytes in 0 blocks
==16082== still reachable: 1 bytes in 1 blocks
==16082== suppressed: 0 bytes in 0 blocks
==16082== Reachable blocks (those to which a pointer was found) are not shown.
==16082== To see them, rerun with: --leak-check=full --show-reachable=yes
The program does not terminate.
With Valgrind 3.7.0, you can trigger (among other things) a leak search from the shell, using vgdb.
See e.g. http://www.valgrind.org/docs/manual/mc-manual.html#mc-manual.monitor-commands
(you can issue these monitor commands from gdb or from a shell command line, using vgdb).
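A hedged sketch of such a session (myProg is a placeholder, and monitor-command arguments vary slightly between Valgrind versions, so check the manual page linked above):

# terminal 1: run the program under valgrind with the embedded gdbserver (on by default in 3.7.0)
valgrind --vgdb=yes --leak-check=full ./myProg

# terminal 2: trigger a leak search in the running process from the shell
vgdb leak_check full reachable any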
Use of VALGRIND_DO_LEAK_CHECK (acm's answer) works for me.
Remarks:
- The program has to be launched under Valgrind (valgrind myProg ...).
- The valgrind-devel package has to be installed (to have valgrind/memcheck.h).

JVM fails to allocate XMS under Suse SLES10 X64 running on VMWare ESX

I am trying to allocate RAM with Xms = Xmx on a SLES10 x64 running under VMware.
When stopping the JVM, the following error is thrown:
Java HotSpot(TM) 64-Bit Server VM warning: Failed to reserve shared memory (errno = 12).
The VM has 8 GB of RAM and it is reserved.
The VM sees the 8 GB, and the memory can be allocated during runtime via the Xmx setting.
On another virtual SLES10 with 16 GB of RAM reserved via VMware, I don't have a problem with allocation of RAM; even setting the hugepages and shmmax only by echo works fine:
echo 8000 > /proc/sys/vm/nr_hugepages
echo 8589934592 > /proc/sys/kernel/shmmax
Using the echo commands on the other SLES10 shows no effect in /proc/meminfo at all.
Here are my configs; the first one is the SLES10 where Xms fails to allocate.
# more /apps/liferay-portal-5.2.5/tomcat-5.5.27/bin/setenv.sh
JAVA_HOME=/apps/java5
JRE_HOME=/apps/java5
JAVA_OPTS="$JAVA_OPTS -Xms3G -Xmx3G -XX:NewRatio=3 -XX:MaxPermSize=256m -XX:SurvivorRatio=20 -Dsun.rmi.dgc.client.gcInterval=1800000 -Dsun.rmi.dgc.server.gcInterval=1800000 -XX:+UsePa
rallelGC -XX:ParallelGCThreads=4 -XX:+UseLargePages -Xloggc:/apps/gc.log -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCApplicationConcurrentTime -XX:+PrintGC -XX:+PrintGCTimeStamps -
XX:+PrintGCDetails -Dfile.encoding=UTF8 -Duser.timezone=GMT+2 -Djava.security.auth.login.config=$CATALINA_HOME/conf/jaas.config -Dorg.apache.catalina.loader.WebappClassLoader.ENABLE_C
LEAR_REFERENCES=false"
more /etc/sysctl.conf
kernel.shmmax=7516192768
vm.nr_hugepages=3072
vm.hugetlb_shm_group=1000
more /etc/security/limits.conf
#
#
#* soft core 0
#* hard rss 10000
##student hard nproc 20
##faculty soft nproc 20
##faculty hard nproc 50
#ftp hard nproc 0
##student - maxlogins 4
* soft memlock unlimited
* hard memlock unlimited
tomcat soft memlock 6291456
tomcat hard memlock 6291456
# End of file
# cat /proc/meminfo
MemTotal: 7928752 kB
MemFree: 737004 kB
Buffers: 0 kB
Cached: 417368 kB
SwapCached: 0 kB
Active: 487428 kB
Inactive: 324072 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 7928752 kB
LowFree: 737004 kB
SwapTotal: 2097144 kB
SwapFree: 2097020 kB
Dirty: 0 kB
Writeback: 0 kB
AnonPages: 397208 kB
Mapped: 72180 kB
Slab: 62136 kB
CommitLimit: 2915792 kB
Committed_AS: 748576 kB
PageTables: 3292 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 7028 kB
VmallocChunk: 34359731271 kB
HugePages_Total: 3072
HugePages_Free: 2305
HugePages_Rsvd: 897
Hugepagesize: 2048 kB
# ipcs -l
------ Shared Memory Limits --------
max number of segments = 4096
max seg size (kbytes) = 7340032
max total shared memory (kbytes) = 4611686018427386880
min seg size (bytes) = 1
------ Semaphore Limits --------
max number of arrays = 1024
max semaphores per array = 250
max semaphores system wide = 256000
max ops per semop call = 32
semaphore max value = 32767
------ Messages: Limits --------
max queues system wide = 16
max size of message (bytes) = 65536
default max size of queue (bytes) = 65536
# ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
pending signals (-i) 65536
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
On the second VM it looks like this:
cat /proc/meminfo
MemTotal: 16190448 kB
MemFree: 176812 kB
Buffers: 52752 kB
Cached: 755256 kB
SwapCached: 0 kB
Active: 713808 kB
Inactive: 425300 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 16190448 kB
LowFree: 176812 kB
SwapTotal: 35658896 kB
SwapFree: 35658796 kB
Dirty: 932 kB
Writeback: 0 kB
AnonPages: 333620 kB
Mapped: 79120 kB
Slab: 37492 kB
CommitLimit: 36356744 kB
Committed_AS: 646284 kB
PageTables: 3584 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 23500 kB
VmallocChunk: 34359713907 kB
HugePages_Total: 7224
HugePages_Free: 6654
HugePages_Rsvd: 582
Hugepagesize: 2048 kB
JAVA_OPTS="$JAVA_OPTS -Xms2G -Xmx2G -XX:NewRatio=3 -XX:MaxPermSize=256m -XX:SurvivorRatio=20 -Dsun.rmi.dgc.client.gcInterval=1800000 -Dsun.rmi.dgc.server.gcI
nterval=1800000 -XX:+UseParallelGC -XX:ParallelGCThreads=2 -XX:+UseLargePages -Xloggc:/apps/gc.log -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCApplication
ConcurrentTime -XX:+PrintGC -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -Dfile.encoding=UTF8 -Duser.timezone=GMT+2 -Djava.security.auth.login.config=$CATALINA
_HOME/conf/jaas.config -Dorg.apache.catalina.loader.WebappClassLoader.ENABLE_CLEAR_REFERENCES=false"
hepide01pep1:~ # ipcs -l
------ Shared Memory Limits --------
max number of segments = 4096
max seg size (kbytes) = 8388608
max total shared memory (kbytes) = 4611686018427386880
min seg size (bytes) = 1
------ Semaphore Limits --------
max number of arrays = 1024
max semaphores per array = 250
max semaphores system wide = 256000
max ops per semop call = 32
semaphore max value = 32767
------ Messages: Limits --------
max queues system wide = 16
max size of message (bytes) = 65536
default max size of queue (bytes) = 65536
Did you try with a smaller heap, maybe 2 GB? You can do a simple test with:
java -Xmx3G -version
Let us know how it goes and what it spits out.
I have stumbled on this issue (errno = 12) on CentOS 5.9 as well, using 16 GB heaps.
After verifying hard / soft memory locks were unlimited in /etc/security/limits.conf and still getting the error, I started running java -version as suggested by Anil, with all of my JAVA_OPTS intact.
I have found that removing the "-XX:+UseLargePages" option gets rid of that error.
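To check whether large pages are the culprit on your box too, a quick hedged test (heap sizes taken from the setenv.sh above; -version makes the JVM initialize the heap and exit immediately):

# with large pages, as in setenv.sh; should reproduce the errno = 12 warning
# if the large-page / memlock configuration is the problem
java -Xms3G -Xmx3G -XX:+UseLargePages -version

# identical heap without large pages; if this starts cleanly, -XX:+UseLargePages
# (and the hugepage/shmmax/memlock setup behind it) is what is failing
java -Xms3G -Xmx3G -version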
I hope this helps you!

valgrind, profiling timer expired?

I am trying to profile a simple C program using Valgrind:
[zsun@nel6005001 ~]$ valgrind --tool=memcheck ./fl.out
==2238== Memcheck, a memory error detector
==2238== Copyright (C) 2002-2009, and GNU GPL'd, by Julian Seward et al.
==2238== Using Valgrind-3.5.0 and LibVEX; rerun with -h for copyright info
==2238== Command: ./fl.out
==2238==
==2238==
==2238== HEAP SUMMARY:
==2238== in use at exit: 1,168 bytes in 1 blocks
==2238== total heap usage: 1 allocs, 0 frees, 1,168 bytes allocated
==2238==
==2238== LEAK SUMMARY:
==2238== definitely lost: 0 bytes in 0 blocks
==2238== indirectly lost: 0 bytes in 0 blocks
==2238== possibly lost: 0 bytes in 0 blocks
==2238== still reachable: 1,168 bytes in 1 blocks
==2238== suppressed: 0 bytes in 0 blocks
==2238== Rerun with --leak-check=full to see details of leaked memory
==2238==
==2238== For counts of detected and suppressed errors, rerun with: -v
==2238== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 12 from 8)
Profiling timer expired
The c code I am trying to profile is the following:
void forloop(void){
    int fac=1;
    int count=5;
    int i,k;
    for (i = 1; i <= count; i++){
        for(k=1;k<=count;k++){
            fac = fac * i;
        }
    }
}
"Profiling timer expired" shows up, what does it mean? How to solve this problem? thx!
The problem is that you are using valgrind on a program compiled with -pg. You cannot use valgrind and gprof together. The valgrind manual suggests using OProfile if you are on Linux and need to profile the actual emulation of the program under valgrind.
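In other words, rebuild the program without -pg and rerun it; a minimal sketch, assuming the source file is fl.c (the real file name isn't shown in the question):

# recompile without -pg so no gprof profiling timer (SIGPROF) is active under valgrind
gcc -g fl.c -o fl.out
valgrind --tool=memcheck ./fl.out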
By the way, this isn't computing factorial.
If you're really trying to find out where the time goes, you could try stackshots. I put an infinite loop around your code and took 10 of them. Here's the code:
6: void forloop(void){
7: int fac=1;
8: int count=5;
9: int i,k;
10:
11: for (i = 1; i <= count; i++){
12: for(k=1;k<=count;k++){
13: fac = fac * i;
14: }
15: }
16: }
17:
18: int main(int argc, char* argv[])
19: {
20: int i;
21: for (;;){
22: forloop();
23: }
24: return 0;
25: }
And here are the stackshots, re-ordered with the most frequent at the top:
forloop() line 12
main() line 23
forloop() line 12 + 21 bytes
main() line 23
forloop() line 12 + 21 bytes
main() line 23
forloop() line 12 + 9 bytes
main() line 23
forloop() line 13 + 7 bytes
main() line 23
forloop() line 13 + 3 bytes
main() line 23
forloop() line 6 + 22 bytes
main() line 23
forloop() line 14
main() line 23
forloop() line 7
main() line 23
forloop() line 11 + 9 bytes
main() line 23
What does this tell you? It says that line 12 consumes about 40% of the time, and line 13 consumes about 20% of the time. It also tells you that line 23 consumes nearly 100% of the time.
That means unrolling the loop at line 12 might potentially give you a speedup factor of 100/(100-40) = 100/60 = 1.67x approximately. Of course there are other ways to speed up this code as well, such as by eliminating the inner loop, if you're really trying to compute factorial.
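For illustration, a sketch of that unrolling (count is fixed at 5 here, so the inner loop body can simply be repeated; a hypothetical variant, not a measured one):

/* forloop() with the inner loop from line 12 unrolled by hand: same result,
 * but without the inner loop-control overhead that the stackshots hit */
void forloop_unrolled(void) {
    int fac = 1;
    int count = 5;
    int i;
    for (i = 1; i <= count; i++) {
        fac = fac * i;
        fac = fac * i;
        fac = fac * i;
        fac = fac * i;
        fac = fac * i;
    }
}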
I'm just pointing this out because it's a bone-simple way to do profiling.
You are not going to be able to compute 10000! like that. You will need some sort of bignum implementation for computing factorials. This is because an int is "usually" 4 bytes long, which means it can "usually" hold at most 2^32 - 1 (2^31 - 1 for a signed int), and 13! is already more than that. Even if you used an unsigned long ("usually" 8 bytes) you'd overflow by the time you reached 21!.
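A small demonstration of those limits (assuming a 4-byte int; signed overflow is undefined behavior, so whatever gets printed past 12! is garbage):

#include <stdio.h>

int main(void) {
    int fac = 1;
    int i;

    /* 12! = 479001600 still fits in a 32-bit signed int (max 2147483647);
     * 13! = 6227020800 does not, so the last iteration overflows */
    for (i = 1; i <= 13; i++) {
        fac = fac * i;
        printf("%2d! = %d\n", i, fac);
    }
    return 0;
}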
As for what "Profiling timer expired" means: it means Valgrind received the signal SIGPROF (http://en.wikipedia.org/wiki/SIGPROF), which probably means your program took too long.