Valgrind for macOS Mojave 10.14.2? Any alternatives? - valgrind

OK, so I saw that someone asked this question over 4 months ago, but Mojave has been out for a decent while now. Does anyone know how to get Valgrind working, or of any alternatives, so that I can check my programs for memory leaks? I am a student, so cost does matter, but this is a requirement for several of my classes. I would prefer not to have to use a virtual machine, since they never run well on Macs. Any suggestions would be great. Thanks.

You have a few options here.
You can use Xcode for development and run the code in Profile mode.
You can launch Instruments and attach it to your process.
You can run the code and use the leaks command-line tool to determine the leak sizes (sample output below; see the sketch after this list for the kind of program that produces it):
> leaks 2419
Process: LeakingTheMemory [2419]
Path: /Users/USER/*/LeakingTheMemory
...
...
...
leaks Report Version: 4.0
Process 2419: 196 nodes malloced for 262162 KB
Process 2419: 26 leaks for 134217760 total leaked bytes.
26 (128M) << TOTAL >>
1 (64.0M) ROOT LEAK: 0x10b17c000 [67108864]
1 (32.0M) ROOT LEAK: 0x105726000 [33554432]
1 (16.0M) ROOT LEAK: 0x104726000 [16777216]
1 (8.00M) ROOT LEAK: 0x103f26000 [8388608]
1 (4.00M) ROOT LEAK: 0x103b26000 [4194304]
1 (2.00M) ROOT LEAK: 0x103926000 [2097152]
1 (1.00M) ROOT LEAK: 0x103826000 [1048576]
1 (512K) ROOT LEAK: 0x1037a6000 [524288]
1 (256K) ROOT LEAK: 0x103766000 [262144]
1 (128K) ROOT LEAK: 0x103746000 [131072]
1 (64.0K) ROOT LEAK: 0x103735000 [65536]
1 (32.0K) ROOT LEAK: 0x7fa354007800 [32768]
1 (16.0K) ROOT LEAK: 0x7fa354003800 [16384]
1 (8.00K) ROOT LEAK: 0x7fa354001800 [8192]
1 (4.00K) ROOT LEAK: 0x7fa354000800 [4096]
1 (2.00K) ROOT LEAK: 0x7fa354000000 [2048]
1 (1.00K) ROOT LEAK: 0x7fa353802000 [1024]
1 (512 bytes) ROOT LEAK: 0x7fa3535000a0 [512]
1 (256 bytes) ROOT LEAK: 0x7fa353402fa0 [256]
1 (128 bytes) ROOT LEAK: 0x7fa353500020 [128]
1 (64 bytes) ROOT LEAK: 0x7fa353600000 [64]
1 (32 bytes) ROOT LEAK: 0x7fa353402d40 [32]
1 (16 bytes) ROOT LEAK: 0x7fa353402eb0 [16]
1 (16 bytes) ROOT LEAK: 0x7fa353402ec0 [16]
1 (16 bytes) ROOT LEAK: 0x7fa353500000 [16]
1 (16 bytes) ROOT LEAK: 0x7fa353500010 [16]
You can use Malloc Debugging Features
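For illustration, here is a minimal sketch (my own example, not the original poster's program) of the kind of program that produces the doubling ROOT LEAK pattern shown above:
/* leaking_the_memory.c - hypothetical example of a program that leaks
 * a series of doubling allocations, as in the leaks output above. */
#include <stdlib.h>
#include <unistd.h>

int main(void) {
    /* Allocate blocks from 16 bytes up to 64 MB and discard every pointer. */
    for (size_t size = 16; size <= 64u * 1024 * 1024; size *= 2) {
        void *p = malloc(size);
        (void)p;  /* pointer dropped: the block is leaked */
    }
    pause();  /* keep the process alive so `leaks <pid>` can examine it */
    return 0;
}
The Malloc Debugging Features include, for example, the MallocStackLogging environment variable; with it set, leaks can also show an allocation backtrace for each leaked block. A hedged sketch of the invocation:
$ MallocStackLogging=1 ./LeakingTheMemory &
$ leaks LeakingTheMemory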

Related

jcmd - difference in heap usage numbers reported by the jcmd commands heap_info vs class_histogram vs native_memory

Can someone explain the difference in the heap usage numbers reported by the following commands, run on the same program at around the same time?
root@8fd0f20ba530:/# sudo -u xxx jcmd 477 GC.heap_info
477:
PSYoungGen total 934912K, used 221097K [0x0000000670700000, 0x00000006b2180000, 0x00000007c0000000)
eden space 933376K, 23% used [0x0000000670700000,0x000000067dd9dfd8,0x00000006a9680000)
from space 1536K, 86% used [0x00000006b2000000,0x00000006b214c538,0x00000006b2180000)
to space 32256K, 0% used [0x00000006ae280000,0x00000006ae280000,0x00000006b0200000)
ParOldGen total 672256K, used 244738K [0x00000003d1400000, 0x00000003fa480000, 0x0000000670700000)
object space 672256K, 36% used [0x00000003d1400000,0x00000003e0300928,0x00000003fa480000)
Metaspace used 112665K, capacity 119872K, committed 119936K, reserved 1155072K
class space used 13760K, capacity 15212K, committed 15232K, reserved 1048576K
root@8fd0f20ba530:/# sudo -u xxx jcmd 477 VM.native_memory
477:
Native Memory Tracking:
Total: reserved=18811604KB, committed=2677784KB
- Java Heap (reserved=16494592KB, committed=1615360KB)
(mmap: reserved=16494592KB, committed=1615360KB)
- Class (reserved=1167716KB, committed=132580KB)
(classes #21174)
(malloc=12644KB #49240)
(mmap: reserved=1155072KB, committed=119936KB)
- Thread (reserved=217029KB, committed=217029KB)
(thread #211)
(stack: reserved=215880KB, committed=215880KB)
(malloc=715KB #1260)
(arena=434KB #416)
- Code (reserved=266396KB, committed=96164KB)
(malloc=16796KB #20839)
(mmap: reserved=249600KB, committed=79368KB)
- GC (reserved=613051KB, committed=563831KB)
(malloc=10419KB #1196)
(mmap: reserved=602632KB, committed=553412KB)
- Compiler (reserved=721KB, committed=721KB)
(malloc=587KB #1723)
(arena=135KB #7)
- Internal (reserved=19877KB, committed=19877KB)
(malloc=19845KB #29606)
(mmap: reserved=32KB, committed=32KB)
- Symbol (reserved=26534KB, committed=26534KB)
(malloc=22914KB #244319)
(arena=3620KB #1)
- Native Memory Tracking (reserved=5483KB, committed=5483KB)
(malloc=29KB #331)
(tracking overhead=5454KB)
- Arena Chunk (reserved=203KB, committed=203KB)
(malloc=203KB)
root@8fd0f20ba530:/# sudo -u xxx jcmd 477 GC.class_histogram | more
477:
num #instances #bytes class name
----------------------------------------------
1: 159930 57884816 [C
2: 10625 10583856 [B
3: 14628 5228552 [I
4: 134612 4307584 java.util.concurrent.ConcurrentHashMap$Node
.
.
.
9425: 1 16 sun.util.locale.provider.TimeZoneNameUtility$TimeZoneNameGetter
9426: 1 16 sun.util.resources.LocaleData
9427: 1 16 sun.util.resources.LocaleData$LocaleDataResourceBundleControl
Total 1325671 114470096
So, from the above, jcmd GC.class_histogram shows heap used = 114M,
but jcmd GC.heap_info shows heap used = 221M + 244M = 465M and total = 934M + 672M = 1.6G,
and jcmd VM.native_memory shows total committed = 2.6G (the Java Heap line itself shows committed = 1.6G).
I am using OpenJDK 64-Bit Server VM version 25.292-b10.
Any pointers to understand which numbers to follow?

valgrind - total heap usage: 0 allocs, 0 frees, 0 bytes allocated

When I run valgrind on my binary, it always shows the output below, even though I have allocated memory using malloc:
==13775== HEAP SUMMARY:
==13775== in use at exit: 0 bytes in 0 blocks
==13775== total heap usage: 0 allocs, 0 frees, 0 bytes allocated
==13775==
==13775== All heap blocks were freed -- no leaks are possible
Please let me know the solution if someone has faced this problem before.
Usually, valgrind not seeing any malloc/free calls is due to one of the following reasons:
1. the program is linked statically
2. the program is linked dynamically, but the malloc/free library is static
3. the malloc/free library is dynamic, but it is a 'non-standard' library (for example tcmalloc)
As ldd shows that you have some dynamic libraries, it is not reason 1.
So, it might be reason 2 or reason 3.
For both 2 and 3, you can make it work by using the option
--soname-synonyms=somalloc=....
See the user manual for more details.
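For example, the user manual describes forms like the following (myprog is a placeholder binary name); the first handles an alternative dynamic allocator such as tcmalloc, the second the case where the allocator is linked statically into the executable:
$ valgrind --leak-check=full --soname-synonyms=somalloc=*tcmalloc* ./myprog
$ valgrind --leak-check=full --soname-synonyms=somalloc=NONE ./myprog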

valgrind more allocs than frees but no leaks

I have encountered a problem. When I run valgrind with my program, I get the following output, and it confuses me:
==12919== HEAP SUMMARY:
==12919== in use at exit: 97,820 bytes in 1 blocks
==12919== total heap usage: 17 allocs, 16 frees, 99,388 bytes allocated
==12919==
==12919== LEAK SUMMARY:
==12919== definitely lost: 0 bytes in 0 blocks
==12919== indirectly lost: 0 bytes in 0 blocks
==12919== possibly lost: 0 bytes in 0 blocks
==12919== still reachable: 97,820 bytes in 1 blocks
==12919== suppressed: 0 bytes in 0 blocks
==12919== Reachable blocks (those to which a pointer was found) are not shown.
==12919== To see them, rerun with: --leak-check=full --show-reachable=yes
==12919==
==12919== For counts of detected and suppressed errors, rerun with: -v
==12919== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 6 from 6)
I found that it was caused by compiling with the "-pg" flag. Without it, everything is fine!
From the valgrind FAQ:
5.2. With Memcheck's memory leak detector, what's the difference between "definitely lost", "indirectly lost", "possibly lost", "still reachable", and "suppressed"?
The details are in the Memcheck section of the user manual.
In short:
"definitely lost" means your program is leaking memory -- fix those leaks!
"indirectly lost" means your program is leaking memory in a pointer-based structure. (E.g. if the root node of a binary tree is "definitely lost", all the children will be "indirectly lost".) If you fix the "definitely lost" leaks, the "indirectly lost" leaks should go away.
"possibly lost" means your program is leaking memory, unless you're doing unusual things with pointers that could cause them to point into the middle of an allocated block; see the user manual for some possible causes. Use --show-possibly-lost=no if you don't want to see these reports.
"still reachable" means your program is probably ok -- it didn't free some memory it could have. This is quite common and often reasonable. Don't use --show-reachable=yes if you don't want to see these reports.
"suppressed" means that a leak error has been suppressed. There are some suppressions in the default suppression files. You can ignore suppressed errors.

Size of ELF file vs size in RAM

I have an STM32 board onto which I load ELF files into RAM (using OpenOCD and JTAG). So far, I haven't really been paying attention to the size of the ELF files that I load.
Normally, when I compile an ELF file that is too large for my board (it has 128KB of RAM into which the executable can be loaded), the linker complains (I specify the size of the RAM in the linker script).
Now that I have noticed the size of the output ELF file, I see that it is 261KB, and yet the linker has not complained!
Why is my ELF file so large, yet my linker is fine with it? Is the ELF file on the host loaded onto the board exactly as it is?
No -- ELF contains things like relocation records that don't get loaded. It can also contain debug information (typically in DWARF format) that only gets loaded by a debugger.
You might want to use readelf to see what one of your ELF files actually contains. You probably don't want to do it all the time, but doing it at least a few times can give you a much better idea of what you're dealing with.
readelf is part of the binutils package; chances are pretty decent you already have a copy that came with your other development tools.
If you want to get into even more detail, Googling for something like "ELF Format" should turn up lots of articles. Be aware, however, that ELF is a decidedly non-trivial format. If you decide you want to understand all the details, it'll take quite a bit of time and effort.
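For instance (your-elf-file.elf being a placeholder):
$ readelf -h your-elf-file.elf   # ELF header: file type, machine, entry point
$ readelf -S your-elf-file.elf   # section headers, with sizes and flags
$ readelf -l your-elf-file.elf   # program headers: the segments that actually get loaded
The section listing in particular makes it obvious how much of the file is debug information rather than loadable code and data.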
Using the arm-none-eabi-size utility, you can get a better picture of what actually gets used on the chip. The -A option will break down the size by section.
The relevant sections to look at when it comes to RAM are .data and .bss (static RAM usage) and .heap (the heap: dynamic memory allocated by your program).
Roughly speaking, as long as the static RAM size is below the RAM number from the datasheet, you should be able to run something on the chip and the linker shouldn't complain - your heap usage will then depend on your program.
Note: .text is what needs to fit in the flash (the code).
example:
arm-none-eabi-size -A your-elf-file.elf
Sample output:
section size addr
.mstack 2048 536870912
.pstack 2304 536872960
.nocache 32 805322752
.eth 0 805322784
.vectors 672 134217728
.xtors 68 134610944
.text 162416 134611072
.rodata 23140 134773488
.ARM.exidx 8 134796628
.data 8380 603979776
.bss 101780 603988160
.ram0_init 0 604089940
.ram0 0 604089940
.ram1_init 0 805306368
.ram1 0 805306368
.ram2_init 0 805322784
.ram2 0 805322784
.ram3_init 0 805339136
.ram3 0 805339136
.ram4_init 0 939524096
.ram4 0 939524096
.ram5_init 0 536875264
.ram5 0 536875264
.ram6_init 0 0
.ram6 0 0
.ram7_init 0 947912704
.ram7 0 947912704
.heap 319916 604089940
.ARM.attributes 51 0
.comment 77 0
.debug_line 407954 0
.debug_info 3121944 0
.debug_abbrev 160701 0
.debug_aranges 14272 0
.debug_str 928595 0
.debug_loc 493671 0
.debug_ranges 146776 0
.debug_frame 51896 0
Total 5946701
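As a worked example with the numbers above (note that this sample is from a chip with more RAM than the asker's 128KB part): static RAM usage is roughly .data + .bss = 8380 + 101780 = 110160 bytes ≈ 108KB; the .heap section reserves a further 319916 bytes ≈ 312KB for dynamic allocation; and the .debug_* sections, over 5MB here, inflate the ELF file on the host but are never loaded onto the chip.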

Can valgrind output partial reports without having to quit the profiled application?

I want to check a long running process for memory leaks with valgrind. I suspect the memory leak I'm after might happen only after several hours of execution. I can run the app under valgrind and get the valgrind log just fine, but doing so means I have to quit the application and start it again anew for a new valgrind session for which I would still have to wait several hours. Is it possible to keep valgrind and the app running and still get valgrind's (partial) data at any point during execution?
You can do that by using the Valgrind gdbserver and GDB.
In short, you start your program with valgrind as usual, but with the --vgdb=yes switch:
$ valgrind --tool=memcheck --vgdb=yes ./a.out
In another session, you start gdb on the same executable, and connect to valgrind. You can then issue valgrind commands:
$ gdb ./a.out
...
(gdb) target remote | vgdb
....
(gdb) monitor leak_check full reachable any
==8677== 32 bytes in 1 blocks are definitely lost in loss record 1 of 2
==8677== at 0x4C28E3D: malloc (vg_replace_malloc.c:263)
==8677== by 0x400591: foo (in /home/me/tmp/a.out)
==8677== by 0x4005A7: main (in /home/me/tmp/a.out)
==8677==
==8677== 32 bytes in 1 blocks are definitely lost in loss record 2 of 2
==8677== at 0x4C28E3D: malloc (vg_replace_malloc.c:263)
==8677== by 0x400591: foo (in /home/me/tmp/a.out)
==8677== by 0x4005AC: main (in /home/me/tmp/a.out)
==8677==
==8677== LEAK SUMMARY:
==8677== definitely lost: 64 bytes in 2 blocks
==8677== indirectly lost: 0 bytes in 0 blocks
==8677== possibly lost: 0 bytes in 0 blocks
==8677== still reachable: 0 bytes in 0 blocks
==8677== suppressed: 0 bytes in 0 blocks
==8677==
(gdb)
See the manual for the list of commands; the Memcheck-specific monitor commands are documented in the Memcheck section of the manual.
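If you don't want an interactive GDB session, the manual also describes a standalone mode for vgdb in which it forwards a single monitor command to the running valgrind. A sketch, assuming only one valgrind process is running:
$ vgdb leak_check full reachable any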