Memory allocation fails when using Valgrind - embedded

I am trying to debug an embedded application with Valgrind.
Unfortunately, this application behaves differently than when I run it without Valgrind. At one point a driver allocates a data block of about 4 MB. This allocation fails even though there is still about 90 MB of memory available. Could it be that Valgrind fragments the memory so much that no contiguous block of that size is available anymore?
Does anyone have an idea how to remedy this?

Related

Is it possible to set a baseline memory usage in valgrind for leak detection?

Is there a way to tell valgrind from inside my code when to start and when to stop checking for memory leaks?
I am using a legacy testing framework which must link with my testing program in order to run. The framework has memory leaks in it - valgrind shows about 50KB of memory that has not been released, but is reachable via heuristic. This is annoying, because I must keep this number in mind to see how much memory is leaked from my code. It would be a lot more convenient if I could tell valgrind to start collecting memory stats when my first test begins, and stop collecting when the last test is over. Is there an API for it?
Valgrind Memcheck lets you do a "differential" leak search: it reports the delta between the previous leak search and the current situation.
You can trigger such a differential leak search using monitor commands with vgdb, either from the shell or from gdb. See https://www.valgrind.org/docs/manual/mc-manual.html#mc-manual.monitor-commands.
You can also use the client request VALGRIND_DO_CHANGED_LEAK_CHECK from within your program; see https://www.valgrind.org/docs/manual/mc-manual.html#mc-manual.clientreqs.
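Below is a minimal sketch of how the client requests could be wired into such a test program, assuming the Valgrind headers are installed; run_all_tests() is a hypothetical placeholder for the tests linked against the legacy framework.

#include <stdlib.h>
#include <valgrind/memcheck.h>

/* hypothetical stand-in for the tests linked against the legacy framework */
static void run_all_tests(void)
{
    void *leak = malloc(32);   /* a deliberate leak, just so there is something to report */
    (void)leak;
}

int main(void)
{
    /* Baseline leak search: anything the legacy framework has already
       leaked or left reachable by this point is recorded here. */
    VALGRIND_DO_LEAK_CHECK;

    run_all_tests();

    /* Report only the leak entries that changed since the baseline,
       i.e. leaks attributable to the tests themselves. */
    VALGRIND_DO_CHANGED_LEAK_CHECK;

    return 0;
}

The client request macros are designed to cost essentially nothing when the program is not running under Valgrind, so the instrumentation can stay in the build.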

IntelliJ uses more memory than allocated

My IntelliJ goes unbearably slow, so I was fiddling with memory settings. If you select Help -> Change Memory Settings, you can set the max heap size for IntelliJ. But even after restarting, then running Mac's Activity Monitor, I see it using 5.5GB even though I set the heap to 4092MB.
It's using 1.5GB more than allocated for heap. That's a lot of memory for permgen + stack, don't you think? Or, could it be that this memory setting actually doesn't have any effect on the program?
What you are seeing is the process's virtual memory; it also includes memory-mapped files and many other things used by the JVM internals, plus the native libraries of a dozen Apple frameworks loaded into the process. There is nothing to worry about unless you get an OOM or the IDE becomes slow.
If that happens, refer to the KB documents and report the issue to YouTrack with CPU/memory snapshots.

Java 8 Application using all of System RAM and then crashing with a SIGBUS. What's going on here?

I have a Java 8 application that takes in messages over the network and writes them to multiple memory-mapped files using Java NIO's MappedByteBuffer. A reader simultaneously reads the messages from these files in order, again via MappedByteBuffer, and deletes the files once they have been read. Everything runs smoothly until I have written and read about 246 GB of data, at which point my application crashes with the following:
[thread 139611281577728 also had an error]
[thread 139611278419712 also had an error]
[thread 139611282630400 also had an error]
[thread 139611277367040 also had an error]
[thread 139611283683072 also had an error]
[thread 139611279472384 also had an error]
[thread 139611280525056 also had an error]
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGBUS (0x7) at pc=0x00007f02d10526de, pid=44460, tid=0x00007ef9c9088700
#
# JRE version: Java(TM) SE Runtime Environment (8.0_101-b13) (build 1.8.0_101-b13)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.101-b13 mixed mode linux-amd64 )
# Problematic frame:
# v ~StubRoutines::jint_disjoint_arraycopy
#
# Core dump written. Default location: /home/user/core or core.44460
#
# An error report file with more information is saved as:
# /home/user/hs_err_pid44460.log
The hs_err_pid44460.log is empty and the core dump core.44460 is about 246 GB in size and full of the messages I am trying to write.
I am running with a Max Heap size of 32 GB. As per JConsole, I run out of Free Physical Memory and then crash.
Why am I running out of RAM? Am I forgetting to close some file handle, or not closing my memory-mapped files correctly?
Even though your program may indeed be correct in its usage of MappedByteBuffers, note that at a high allocation pace you can run into problems caused by untimely deallocation of those buffers: releasing the underlying memory is ultimately the JVM's responsibility and normally happens only when the buffer objects are garbage collected. In short, freeing a buffer will eventually succeed, but exactly when is hard to predict.
You can, however, force deallocation ("cleaning") of the memory backing those buffers using the JVM's Cleaner functionality (class sun.misc.Cleaner). Refer to this SO question for directions, but, long story short, you should call Cleaner.clean() on your throwaway buffers as early as possible in order to keep memory usage down and support your use case.
You'll have to look at the virtual memory footprint or memory mapping of the process to figure out whether direct buffers might be the culprit.
If it is indeed crashing due to mapped or direct buffers then you're either leaking them (running heap dumps through a memory analyzer can identify those) or the GC is running too infrequently to release them.
You might also find that more aggressive garbage collection helps.
You might also like to try the G1 collector, introduced in Java 7:
-XX:+UseG1GC
-XX:ParallelGCThreads=4
The second option allows four GC threads to run in parallel.
There are a number of good articles about tuning your garbage collector; here is one I found useful: https://blogs.oracle.com/java-platform-group/entry/g1_from_garbage_collector_to
Hope this helps.

How to measure Valgrind's memory usage?

I have an application written in C which uses the zmalloc (borrowed from Redis) memory wrapper to keep track of the total dynamic allocated memory by my program. I am also using Valgrind on Linux to find memory leaks and invalid memory accesses.
The problem is that zmalloc and top show totally different memory usage reports when I am using Valgrind. This makes me think that Valgrind itself is consuming too much memory.
How do I measure Valgrind's memory usage?
Valgrind tools such as Memcheck or Helgrind use a lot of memory for tracking various aspects of your program, so it is normal that top shows a lot more memory than what your program allocates itself.
If you want to get an idea of the memory used by Valgrind itself, you can run:
valgrind --stats=yes ...
The lines following
------ Valgrind's internal memory use stats follow ------
will give some information about Valgrind's memory usage.
Use valgrind --profile-heap=yes ... to get a detailed breakdown of memory use.
Note that if you do not use the standard malloc library, you might need the option --soname-synonyms=... for tools such as Memcheck or Helgrind to work properly.
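For reference, a zmalloc-style wrapper of the kind mentioned in the question typically just wraps malloc/free and keeps a running total of the bytes the program requests. The following is a simplified sketch of that idea, not Redis's actual implementation:

#include <stdlib.h>
#include <string.h>

/* running total of bytes handed out through the wrapper */
static size_t used_memory = 0;

#define PREFIX_SIZE sizeof(size_t)

void *zmalloc(size_t size)
{
    /* store the requested size in front of the block so zfree can
       subtract it again later */
    char *real = malloc(PREFIX_SIZE + size);
    if (real == NULL)
        return NULL;
    memcpy(real, &size, sizeof size);
    used_memory += size;
    return real + PREFIX_SIZE;
}

void zfree(void *ptr)
{
    size_t size;
    char *real;
    if (ptr == NULL)
        return;
    real = (char *)ptr - PREFIX_SIZE;
    memcpy(&size, real, sizeof size);
    used_memory -= size;
    free(real);
}

size_t zmalloc_used_memory(void)
{
    return used_memory;
}

A wrapper like this still calls the system malloc underneath, so Memcheck intercepts the allocations as usual; its counter only reflects the bytes the program itself requests, while top additionally sees Valgrind's own tracking structures.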

Working of the valgrind tool suite

I ran Valgrind on a sample daemon program. The parent exits after allocating a chunk of 1000 B, while the child keeps running in the background and allocates 200 B of heap memory through malloc every two seconds.
My question is: does Valgrind execute the program on the actual processor, or on a synthetic CPU?
Does it allocate the memory on the actual heap, or in some synthetic RAM that doesn't really exist?
I let the program run for quite a long time, long enough that the child allocated some 2 GB of memory on the heap. When I ran the program under Massif, I got one output file for the parent, and on killing the daemon process I got another massif.out file for the child, which showed the memory allocated on the heap.
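For reference, a minimal sketch of a daemon along these lines; the allocation sizes and the two-second interval come from the question above, while the rest is an assumed reconstruction:

#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    /* parent: allocate a 1000 B chunk, fork, then exit */
    void *parent_block = malloc(1000);
    (void)parent_block;

    if (fork() != 0)
        return 0;              /* parent exits, the child keeps running */

    /* child: allocate 200 B on the heap every two seconds, forever */
    for (;;) {
        void *p = malloc(200);
        (void)p;               /* intentionally never freed */
        sleep(2);
    }
}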
Valgrind runs the program on its own synthetic CPU; none of the program's machine code reaches the host CPU directly.
Memory allocation is hooked by Memcheck if you use it; otherwise Valgrind calls the libc memory allocation routines.
These facts can indeed complicate debugging of system services under Valgrind.
If you turn on Memcheck (which is the default tool), then Valgrind will manage the heap, i.e. all the memory-related functions (malloc/free/memmove etc.) are replaced by Valgrind's versions of the corresponding functions.
As already mentioned, your program runs on a virtual CPU created and managed by Valgrind.
There is no notion of synthetic RAM as far as I know. In any case, all of this is transparent to the running process (your daemon) and should not change the behavior of your program in any way.
So the answer is yes for the synthetic CPU and no for synthetic RAM.