Is it possible to set a baseline memory usage in valgrind for leak detection? - valgrind

Is there a way to tell valgrind from inside my code when to start and when to stop checking for memory leaks?
I am using a legacy testing framework which must link with my testing program in order to run. The framework has memory leaks in it - valgrind shows about 50KB of memory that has not been released, but is reachable via heuristic. This is annoying, because I must keep this number in mind to see how much memory is leaked from my code. It would be a lot more convenient if I could tell valgrind to start collecting memory stats when my first test begins, and stop collecting when the last test is over. Is there an API for it?

valgrind memcheck lets you do a "differential" leak search. A differential leak search reports the delta between the previous leak search and the current situation.
You can do such a differential leak search using monitor commands with vgdb, either from the shell or from gdb. See https://www.valgrind.org/docs/manual/mc-manual.html#mc-manual.monitor-commands.
You can also use the client request VALGRIND_DO_CHANGED_LEAK_CHECK from your program; see https://www.valgrind.org/docs/manual/mc-manual.html#mc-manual.clientreqs.
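For example, a minimal sketch (the framework_init/framework_shutdown and run_all_tests names are hypothetical stand-ins for your legacy framework and your test driver): take a baseline leak check after the framework has set itself up, then ask only for what changed once your own tests have finished.

#include <valgrind/memcheck.h>

static void framework_init(void)     { /* hypothetical: leaky legacy framework setup */ }
static void framework_shutdown(void) { /* hypothetical: framework teardown */ }
static void run_all_tests(void)      { /* hypothetical: your own tests go here */ }

int main(void)
{
    framework_init();

    /* Baseline: everything allocated (and leaked) so far is recorded here. */
    VALGRIND_DO_LEAK_CHECK;

    run_all_tests();

    /* Report only the delta since the baseline, i.e. leaks from your own code. */
    VALGRIND_DO_CHANGED_LEAK_CHECK;

    framework_shutdown();
    return 0;
}

The client requests expand to no-ops when the program is not running under valgrind, so they can stay in the test driver permanently.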

Related

Memory allocation fails when using Valgrind

I am trying to debug an embedded application with Valgrind.
Unfortunately, this application behaves differently than when I run it without Valgrind. At one point a driver allocates a data block of about 4 MB. This allocation fails even though there is still about 90 MB of memory available. Could it be that Valgrind fragments the memory so much that no contiguous block of that size is available anymore?
Does anyone have an idea how to remedy this?

How to programmatically purge/clean cocoa application memory?

I'm working on a Mac app. Initially, monitoring Xcode's memory report while I ran my app showed that memory usage was ramping up like crazy. I used Instruments and profiled my app for allocations and leaks. It turned out there wasn't much leaked memory of the kind you would expect from strong reference cycles etc. However, there was a lot of abandoned memory. By following the stack traces that led to my code I have fixed about 70% of it using autorelease pools etc. Still, the remaining 30% of abandoned memory seems to point to system calls.
Based on that, I have two questions:
1) I want to fix the remaining 30%. How can I get rid of abandoned memory? I have already used Instruments and know exactly where those system calls are spawned, but I still don't know what to do to have that memory cleaned up. (I'm using ARC, so there is no manual retain/release, and autorelease doesn't seem to make a difference.)
2) Once whatever my application was doing has completed and there is no need for any memory to be held (just like when the application first started), I want to get rid of all the memory that my app has used up. I plan to use this as a brute-force approach to clean up all memory, just like the system would if the user closed the app or turned off the system.
Basically, if I know where my app's memory is in the file system, I'll just programmatically call the purge command on it or something similar. At this point I'm 100% sure nothing needs to be in memory for the app except for the first screen that you would expect the first time you launch the app.
I have read several related posts, but they weren't helpful.

How to measure Valgrind's memory usage?

I have an application written in C which uses the zmalloc (borrowed from Redis) memory wrapper to keep track of the total dynamic allocated memory by my program. I am also using Valgrind on Linux to find memory leaks and invalid memory accesses.
The problem is that zmalloc and top show totally different memory usage reports when I am using Valgrind. This makes me think that Valgrind itself is consuming too much memory.
How do I measure Valgrind's memory usage?
Valgrind tools such as memcheck or helgrind use a lot of memory for tracking various aspects of your program. So it is normal that top shows a lot more memory than what your program allocates itself.
If you want to have an idea about the memory used by valgrind, you can do:
valgrind --stats=yes ...
The lines following
------ Valgrind's internal memory use stats follow ------
will give some info about valgrind memory usage.
Use valgrind --profile-heap=yes ... to get detailed memory-use information.
Note that if you do not use the standard malloc library, you might need the option --soname-synonyms=... to have tools such as memcheck or helgrind work properly.
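For instance (illustrative command lines; the program name and the --soname-synonyms value are assumptions that depend on your build, e.g. whether zmalloc is backed by a statically linked jemalloc):

valgrind --tool=memcheck --stats=yes ./myprog
valgrind --tool=memcheck --stats=yes --profile-heap=yes ./myprog
valgrind --tool=memcheck --soname-synonyms=somalloc=NONE ./myprog

The first run prints valgrind's internal memory-use statistics at the end, the second adds the detailed heap profile, and the third tells memcheck to treat a statically linked alternative allocator as the malloc implementation to intercept.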

Meaning of clearance of generation analysis memory growth on simulator "simulate memory warning" event

I was trying to debug the memory growth in generation analysis and was getting frustrated (lots of objects that were the result of calls to CGGlyphBitmapCreate were not being released). Then I ran the program on the simulator, captured many generation snapshots, and used Simulate Memory Warning. Almost every generation cleared to zero (a few had a few bytes here and there). Does that mean my code is fine and I should not worry about it? How can I prevent the growth so that it doesn't have to wait for a simulated memory warning to be cleared? (By the way, all this growth was caused by system libraries.)
If the memory is getting released upon memory warning, then you're probably OK. The OS will cache all sorts of stuff (that it will free/reuse as it sees fit) that you don't generally have to be concerned about.
Still, I would run the code through the static analyzer (press shift+command+B in Xcode or select "Analyze" on the Xcode "Product" menu) just to be safe.

How does valgrind work?

Can someone provide a quick top level explanation of how Valgrind works? An example: how does it know when memory is allocated and freed?
Valgrind basically runs your application in a "sandbox." While running in this sandbox, it is able to insert its own instructions to do advanced debugging and profiling.
From the manual:
Your program is then run on a synthetic CPU provided by the Valgrind core. As new code is executed for the first time, the core hands the code to the selected tool. The tool adds its own instrumentation code to this and hands the result back to the core, which coordinates the continued execution of this instrumented code.
So basically, valgrind provides a virtual processor that executes your application. However, before your application instructions are processed, they are passed to tools (such as memcheck). These tools are kind of like plugins, and they are able to modify your application before it is run on the processor.
The great thing about this approach is that you don't have to modify or relink your program at all to run it in valgrind. It does cause your program to run slower; however, valgrind isn't meant to measure performance or to run during normal execution of your application, so this isn't really an issue.
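As a minimal illustration (hypothetical file and program names), you can take an ordinary, unmodified binary and run it under the memcheck tool:

/* leak.c */
#include <stdlib.h>

int main(void)
{
    char *p = malloc(64);   /* never freed: memcheck will typically report this as "definitely lost" */
    (void)p;
    return 0;
}

cc -g -o leak leak.c
valgrind --leak-check=full ./leak

No recompilation against a valgrind library and no relinking is needed; compiling with -g only makes the reported stack traces point at source lines.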
Valgrind is a dynamic binary analysis (DBA) tool that uses a dynamic binary instrumentation (DBI) framework to check memory allocation, detect deadlocks and profile applications. The DBI framework has its own low-level memory manager, scheduler, thread handler and signal handler. The Valgrind tool suite includes tools such as:
Memcheck - tracks the memory allocation dynamically and reports memory leaks.
Helgrind - detects and reports deadlocks, potential data races and lock reversals.
Cachegrind - simulates how the application interacts with system cache and provides information about cache misses.
Nulgrind - a minimal tool that performs no analysis at all; used by developers for performance benchmarking of Valgrind itself.
Massif - a tool to analyse the heap memory usage of the application.
Valgrind uses a disassemble-and-resynthesize mechanism: it loads the application into a process, disassembles the application code, adds the instrumentation code for analysis, assembles it back and executes the application. It uses a just-in-time (JIT) compiler to embed the instrumentation code into the application.
Valgrind Tool = Valgrind Core + Tool Plugin
The Valgrind core disassembles the application code and passes each code fragment to the tool plugin for instrumentation. The tool plugin adds the analysis code and assembles it back. Thus, Valgrind provides the flexibility to write your own tool on top of the Valgrind framework. Valgrind uses shadow registers and shadow memory to instrument read/write instructions, read/write system calls, and stack and heap allocations.
Valgrind provides wrappers around system calls and registers pre- and post-callbacks for every system call in order to track the memory accessed as part of it. Thus, Valgrind acts as an OS abstraction layer between the Linux operating system and the client application.
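To make the shadow-memory idea concrete, here is a toy sketch (my own illustration, not Valgrind's actual data structures; memcheck really tracks per-bit addressability and validity metadata in compressed maps). One shadow byte per data byte records whether that byte is addressable, and every store is checked against it:

#include <stdio.h>
#include <string.h>

#define HEAP_SIZE 64
static unsigned char heap[HEAP_SIZE];    /* toy "heap" */
static unsigned char shadow[HEAP_SIZE];  /* 0 = not addressable, 1 = addressable */

static void toy_alloc(size_t off, size_t len) { memset(&shadow[off], 1, len); }
static void toy_free(size_t off, size_t len)  { memset(&shadow[off], 0, len); }

/* The kind of check a tool's instrumentation inserts before every store. */
static void checked_store(size_t off, unsigned char val)
{
    if (off >= HEAP_SIZE || !shadow[off]) {
        fprintf(stderr, "invalid write at offset %zu\n", off);
        return;
    }
    heap[off] = val;
}

int main(void)
{
    toy_alloc(0, 16);        /* "allocate" 16 bytes */
    checked_store(8, 42);    /* fine */
    checked_store(20, 7);    /* reported: outside any allocation */
    toy_free(0, 16);
    checked_store(8, 1);     /* reported: write after free */
    return 0;
}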
valgrind sits as a layer between your program and the OS, intercepting the calls that request memory (de)allocation, recording what is being manipulated, and then actually performing the allocation and handing back an equivalent. It's essentially how most code profilers work, except at a much lower level (system calls instead of program function calls).
Here you can find some nice info:
Valgrind: A Framework for Heavyweight Dynamic Binary Instrumentation
http://valgrind.org/docs/manual/manual-core-adv.html#manual-core-adv.wrapping
Besides that, familiarize yourself with LD_PRELOAD.
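To illustrate the interception idea that LD_PRELOAD enables, here is a toy malloc interposer (my own sketch for glibc/Linux, not Valgrind's machinery; Valgrind substitutes its own malloc replacement and additionally instruments every instruction, and a real tool would also interpose free/calloc/realloc and handle re-entrancy carefully):

/* toy_malloc_spy.c
   Build: cc -shared -fPIC -o toy_malloc_spy.so toy_malloc_spy.c -ldl
   Run:   LD_PRELOAD=./toy_malloc_spy.so ./yourprog      (hypothetical program) */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>
#include <stddef.h>

static void *(*real_malloc)(size_t) = NULL;

void *malloc(size_t size)
{
    if (!real_malloc)   /* look up the real allocator once */
        real_malloc = (void *(*)(size_t))dlsym(RTLD_NEXT, "malloc");
    void *p = real_malloc(size);                       /* perform the actual allocation */
    fprintf(stderr, "malloc(%zu) = %p\n", size, p);    /* record what was requested */
    return p;
}

Every malloc call in the target program is routed through the interposer first, which is the same "sit between the program and the allocator" idea described above, just without Valgrind's synthetic CPU.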
Valgrind is basically a virtual machine that executes your program. It is a virtual architecture that intercepts each call to allocate/free memory.