I have a ChibiOS application where I'm using dynamic memory allocation via malloc().
However, I observed that 100% of the time I call malloc(), it returns NULL. I have confirmed that:
The microcontroller memory is not full
The error also occurs for size-1 malloc() calls, so the requested chunk size is not the cause of the issue.
errno is always ENOMEM after the malloc() call
How can I resolve this issue?
When you look at the definition of _sbrk in os/various/syscalls.c, you can clearly see that it always returns an ENOMEM error if CH_CFG_USE_MEMCORE == FALSE.
Unless you set CH_CFG_USE_MEMCORE = TRUE in chconf.h, the ChibiOS core memory manager is disabled completely; _sbrk and the other memory-related syscalls are then only included as stubs so that no linking errors occur.
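For reference, here is a paraphrased sketch of that stub (not verbatim ChibiOS source; the exact code differs between versions, and the names follow newlib conventions):

#include <errno.h>
#include <reent.h>
#include "ch.h"

void *_sbrk_r(struct _reent *r, int incr) {
#if CH_CFG_USE_MEMCORE
  /* Grow the "heap" by taking memory from the core allocator. */
  void *p = chCoreAlloc((size_t)incr);
  if (p == NULL) {
    __errno_r(r) = ENOMEM;
    return (void *)-1;
  }
  return p;
#else
  /* Core memory manager disabled: every request fails, so malloc()
     always sees "out of memory" and returns NULL with errno == ENOMEM. */
  (void)incr;
  __errno_r(r) = ENOMEM;
  return (void *)-1;
#endif
}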
To configure ChibiOS correctly, ensure that the following is set in chconf.h:
#define CH_CFG_USE_MEMCORE TRUE
To avoid running into reliability issues, you might want to use memory pools or alternative allocation strategies instead where possible. See this detailed explanation for a description of why malloc() is often a bad idea on embedded systems (it is in fact entirely forbidden by most embedded coding standards).
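As a minimal sketch of the memory pool approach (the pool macros' exact signatures vary between ChibiOS releases, and MSG_SIZE, MSG_COUNT and msg_pool are made-up names):

#include "ch.h"

#define MSG_SIZE   64   /* fixed block size, illustrative */
#define MSG_COUNT   8   /* number of preallocated blocks */

/* Static backing storage: no heap involvement at all. */
static uint8_t buffer[MSG_COUNT][MSG_SIZE];
static MEMORYPOOL_DECL(msg_pool, MSG_SIZE, PORT_NATURAL_ALIGN, NULL);

void pool_example(void) {
  /* Hand the static blocks to the pool once at startup. */
  chPoolLoadArray(&msg_pool, buffer, MSG_COUNT);

  void *msg = chPoolAlloc(&msg_pool);   /* NULL when the pool is empty */
  if (msg != NULL) {
    /* ... use the fixed-size block ... */
    chPoolFree(&msg_pool, msg);
  }
}

Unlike malloc(), a pool is deterministic: allocation is O(1), failure only happens when all MSG_COUNT blocks are in use, and there is no fragmentation.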
How can I fix the warning:
WARNING: Original variable <x> not released when freeing SCIP problem <my_solver>.
which is spammed once per variable <x>.
Option 1: Disable output.
Ref: https://www.scipopt.org/doc/html/PARAMETERS.php
Setting display/verblevel = 0 disables stdout output, but not errors/warnings. How can I tell SCIP to be quiet?
Option 2: Fix alleged memory leak.
This happens even though I am apparently freeing the problem, variables, constraints, and expressions correctly. I can verify this by attempting to free them another time, and watching the program explode.
This answer: http://listserv.zib.de/pipermail/scip/2020-December/004161.html implies this happens when using transform operations, which I am not doing.
I've also verified with valgrind that there are no memory leaks; though it reports some memory as "still reachable", that memory does not grow no matter how many problems I set up and solve.
Per suggestions by @stefan, the simplest way to disable the warning output is to use:
void SCIPsetMessagehdlrQuiet(SCIP *scip, SCIP_Bool quiet)
Defined here (for 8.0): https://scipopt.org/doc/html/group__MessageOutputMethods.php#gadd04befbbea2ee42599ee26db33d52c9
Passing TRUE did in fact disable those warnings.
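For completeness, a minimal sketch of where that call goes (assumes a standard SCIP setup; the actual problem-building code is elided):

#include <scip/scip.h>
#include <scip/scipdefplugins.h>

int main(void) {
  SCIP *scip = NULL;
  SCIP_CALL( SCIPcreate(&scip) );
  SCIP_CALL( SCIPincludeDefaultPlugins(scip) );

  /* Silence the message handler entirely, warnings included. */
  SCIPsetMessagehdlrQuiet(scip, TRUE);

  /* ... build the problem, solve it, release vars/conss ... */

  SCIP_CALL( SCIPfree(&scip) );
  return 0;
}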
I wrote vulkan code on my laptop that worked, and then I got a new laptop and now running it, the program aborts because vkAllocateDescriptorSets() returns VK_OUT_OF_HOST_MEMORY.
I doubt that it is actually out of memory, and I know it can allocate some memory because vkCreateInstance() doesn't fail like in this Stack Overflow post: Vulkan create instance VK_OUT_OF_HOST_MEMORY.
EDIT: Also, I forgot to mention, vkAllocateDescriptorSets() only returns VK_OUT_OF_HOST_MEMORY the second time I run it.
vkAllocateDescriptorSets allocates descriptors from a pool. So while such an allocation could fail due to a lack of host/device memory, there are two other things that could cause failure. There may simply not be enough memory in the pool to allocate the number of descriptors/sets you asked for. Or there could be enough memory, but repeated allocations/deallocations have fragmented the pool such that the allocations cannot be made.
The case of allocating more descriptors/sets than are available should never happen. After all, you know how many descriptors and sets you put into that pool, so you should know exactly when you'll run out. This is an error state that a working application can guarantee it will never encounter. The VK_KHR_maintenance1 extension did, however, add a dedicated error code for this circumstance: VK_ERROR_OUT_OF_POOL_MEMORY_KHR.
However, if you've screwed up your pool creation in some way, this possibility can arise. Of course, since there's no error code for it (outside of the extension), the implementation has to provide a different error code: either host or device memory exhaustion.
But again, this is a programming error on your part, not something you should ever expect to see with working code. In particular, even if you request that extension, do not keep allocating from a pool until it stops giving you memory. That's just bad coding.
For the fragmentation case, they do have an error code: VK_ERROR_FRAGMENTED_POOL. However, the Khronos Group screwed up. See, the first few releases of Vulkan didn't include this error code; it was added later. That means implementations from before the error code was added (and likely afterwards too) had to pick an inappropriate error code to return: again, either host or device memory.
So you basically have to treat any failure of this function as either fragmentation, a programming error (i.e., you asked for more stuff than you put into the pool), or something else. In all cases, it's not something you can recover from at runtime.
Since it appeared to work once, odds are good that you probably just allocated more stuff than the pool contains. So you should make sure that you add enough stuff to the pool before allocating from it.
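As a sketch, that means sizing the pool up front so later allocations cannot exhaust it (device is assumed to be a valid VkDevice; the counts are illustrative):

#include <vulkan/vulkan.h>

VkDescriptorPool make_pool(VkDevice device) {
  /* Reserve enough descriptors of each type, and enough sets, for
     everything that will ever be allocated from this pool. */
  VkDescriptorPoolSize poolSizes[2] = {
    { VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER,         4 },
    { VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER, 4 },
  };
  VkDescriptorPoolCreateInfo poolInfo = {
    .sType         = VK_STRUCTURE_TYPE_DESCRIPTOR_POOL_CREATE_INFO,
    .maxSets       = 4,   /* total sets this pool can hand out */
    .poolSizeCount = 2,
    .pPoolSizes    = poolSizes,
  };
  VkDescriptorPool pool = VK_NULL_HANDLE;
  /* Failure here is a genuine out-of-memory condition; pool
     exhaustion shows up later, in vkAllocateDescriptorSets(). */
  if (vkCreateDescriptorPool(device, &poolInfo, NULL, &pool) != VK_SUCCESS) {
    /* handle host/device memory exhaustion */
  }
  return pool;
}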
The problem was that I had not allocated enough memory in the pool. I solved it by creating multiple pools, one for each descriptor set.
I was running a test to make sure objects are being deallocated properly by wrapping the relevant code section in a 10-second-long while loop. I ran the test in Debug and Release configurations, with different results.
Debug (Build & Run in simulator):
Release (Build & Run on device, and Profile using Instruments):
The CPU spikes mark where objects are created and destroyed (there are 3 in each run). Notice how in the Debug build the memory usage rises gradually during the busy loop and then settles slightly afterwards at a higher base level; this happens with each loop iteration. In the Release build it stays constant the whole time. After 3 runs, the memory usage of the Debug build ends up significantly higher than that of the Release build. (The CPU spikes are offset on the time axis relative to each other, but that's just because I pressed the button that triggers the loop at different times.)
The inner loop code in question is very simple and basically consists of a bunch of correctly paired malloc and free statements as well as a bunch of retain and release calls (courtesy of ARC, also verified as correctly paired).
Any idea what is causing this behaviour?
In Release builds ARC will do its best to keep objects out of the autorelease pool. It does this using the objc_autoreleaseReturnValue / objc_retainAutoreleasedReturnValue handshake, checking for the optimization at runtime.
A lot of Cocoa Touch classes use caching to improve performance. The amount of memory used for caching data can vary depending on total memory, available memory, and probably some other things. Since you are comparing results from the simulator (which runs on your Mac) and the device, it is not strange that you get different results.
Some examples of classes/methods that use caching:
+(UIImage *)imageNamed:(NSString *)name
Discussion
This method looks in the system caches for an image object with the specified name and returns that object if it exists. If a matching image object is not already in the cache, this method loads the image data from the specified file, caches it, and then returns the resulting object.
NSURLCache
The NSURLCache class implements the caching of responses to URL load requests by mapping NSURLRequest objects to NSCachedURLResponse objects. It provides a composite in-memory and on-disk cache.
For one thing, release builds optimize the code and strip debugging information from it. As a result, the application package is significantly smaller, and less memory is needed to load it.
I suppose that most of the extra memory used in Debug builds is the actual debugging information, zombie tracking, etc.
I've recently discovered the following in my code:
for (NSInteger i; i<x; i++){
...
}
Now, clearly, i should have been initialised. What I find strange is that in the "debug" profile (Xcode), this error goes undetected and the for loop executes without issue, while when the application is built with the "release" profile, a crash occurs.
What flags are responsible for letting this kind of mistake execute in the "debug" profile?
Thanks in advance.
This could be considered a Heisenbug. A declaration without an initialization will typically reserve some space in the stack frame for the variable, and if you read the variable you get whatever happened to be at that location in memory. When compiled for the debug profile, the storage for variables can shift around compared to release. It just happens that whatever is in that location in debug mode does not cause a crash (probably a positive number), while in release mode it is some value that does (probably a negative number).
The clang static analyser should detect this. I always have the "Analyze during build" option switched on.
In the C language, using an uninitialized variable isn't an error but undefined behavior.
Undefined behavior exists because C is designed to be a very efficient low-level language. Reading an uninitialized variable is undefined behavior because it allows the compiler to optimize the variable's allocation, as no default value has to be stored.
But the compiler is licensed to do whatever it wants when undefined behavior occurs. The C FAQ says:
Anything at all can happen; the Standard imposes no requirements. The program may fail to compile, or it may execute incorrectly (either crashing or silently generating incorrect results), or it may fortuitously do exactly what the programmer intended.
So any implementation of an undefined behavior is valid (even one that produces code that formats your hard drive).
Xcode uses different optimizations for the Debug and Release configurations. The Debug configuration has no optimization (-O0 flag), so the compiled executable stays close to your code, allowing you to debug it more easily. On the other hand, the Release configuration produces strongly optimized executables (-Os flag), because you want your application to run fast.
Due to that difference, undefined behaviours may (or may not) produce different results in Release and Debug configurations.
Though the LLVM compiler is quite verbose, it does not emit warnings for undefined behavior by default. You may however run the static analyzer, which can detect this kind of issue.
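As an illustration, here is the same bug in plain C; clang can flag it with -Wuninitialized (enabled by -Wall), as can the static analyzer:

#include <stdio.h>

int main(void) {
  int x = 10;
  /* UB: i holds an indeterminate value, so this loop may run zero
     times, ten times, or misbehave, and the outcome may differ
     between -O0 and -Os builds. */
  for (int i; i < x; i++) {
    printf("%d\n", i);
  }
  return 0;
}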
More information about undefined behaviors and how they are handled by compilers in What Every Programmer Should Know About Undefined Behavior.
I doubt it is so much the flags as the compiler optimizing out the "unused" variable i. Release mode includes far more optimizations than debug mode.
Different compiler optimizations may or may not use a different memory location or register for your uninitialized variable. Different garbage (perhaps from previously used variables, computations, or addresses used by your app) will be left in these different locations before you start using the variable.
The "responsibility" goes to not initializing the variable, as what garbage is left in what locations may not be visible to the compiler, especially in debug mode with most optimatizations off (e.g. you got "lucky" with the debug build).
i has not been initialized. Writing just NSInteger i; declares the variable without initializing it.
You can initialize it as shown below:
for (NSInteger i = 1; i < x; i++) {
...
}
The glib documentation lacks many important things that I think API documentation absolutely should include. For instance, the entry for g_malloc says nothing about the fact that it will abort upon memory allocation failure (in direct contrast to the behaviour of the standard malloc, which the name implies that it mimics). Only if you happen to notice that there is also a variant named g_try_malloc, and read its description, will you be informed that g_try_malloc:
Attempts to allocate n_bytes, and returns NULL on failure. Contrast with g_malloc(), which aborts the program on failure.
Now for the question: glib has a function g_strdup whose documentation also does not mention anything about possibly returning NULL. I assume that it will not, since it presumably uses g_malloc internally. Will it?
The documentation does say it, though. Check the introductory section of the "Memory Allocation" page in the GLib manual:
If any call to allocate memory fails, the application is terminated. This also means that there is no need to check if the call succeeded.
This goes for any library call that allocates memory, and therefore for g_strdup() too.
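To illustrate the practical consequence (a small sketch; error handling is only needed with the g_try_* variants):

#include <glib.h>

int main(void) {
  /* g_strdup() allocates via g_malloc(): on allocation failure the
     program aborts, so a non-NULL input always yields a non-NULL copy. */
  gchar *copy = g_strdup("hello");
  g_print("%s\n", copy);
  g_free(copy);

  /* g_try_malloc() is the variant for recoverable failure handling. */
  gpointer big = g_try_malloc(64 * 1024 * 1024);
  if (big == NULL)
    g_printerr("allocation failed, recovering\n");
  else
    g_free(big);

  return 0;
}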