Why manual memory management? - objective-c

Are there any plans for auto memory management?
What are the advantages of manually managing memory... does it conserve memory in the long run?
I have noticed that .NET Windows applications are very sluggish; is this partly because the garbage collector is not working correctly?

Are there any plans for auto memory management?
On the Mac: there's garbage collection already, as of 10.5.
On the iPhone: no (as of 4.0).
What are the advantages of manually managing memory... does it conserve memory in the long run?
See When NOT to use garbage collection?

The main advantage of manual memory management is that you can specialize it for your application, making it optimal and allowing "easy" optimization for size and speed.
Automatic memory management is helpful when that control isn't necessary, and even the C++ committee acknowledges this (there are plans to add an optional garbage collector to C++), but sometimes you really need to control what's happening behind the scenes, because you have a broader view of the application than any compiler or garbage collector can have.
Having the choice between both is certainly very powerful, but it's not available in most languages.
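To make the "specialize for your application" point concrete, here is a minimal sketch in C of one such specialization, a bump (arena) allocator; the arena_* names are hypothetical, not from any particular library. When many short-lived objects share one lifetime, handing them out from a single block and freeing them all at once beats any general-purpose allocator or collector:

#include <stddef.h>
#include <stdlib.h>

/* A trivial arena: allocation is a pointer bump, deallocation is wholesale. */
typedef struct {
    char  *base;   /* start of the backing block */
    size_t size;   /* total capacity in bytes */
    size_t used;   /* bytes handed out so far */
} arena_t;

int arena_init(arena_t *a, size_t size) {
    a->base = malloc(size);
    a->size = size;
    a->used = 0;
    return a->base != NULL;
}

void *arena_alloc(arena_t *a, size_t n) {
    n = (n + 7) & ~(size_t)7;                 /* keep 8-byte alignment */
    if (a->used + n > a->size) return NULL;   /* out of arena space */
    void *p = a->base + a->used;
    a->used += n;
    return p;
}

void arena_reset(arena_t *a) { a->used = 0; }     /* "free" everything at once */
void arena_destroy(arena_t *a) { free(a->base); }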

With respect to real-time systems, garbage collection can have negative effects on the responsiveness of the program. In their book Small Memory Software, Weir and Noble discuss some of these issues and you can read about it at the end of this section of their book.
In many cases programmers simply choose to write their own memory-management routines that address these issues.

Related

Heap profiling on ARM

I am developing a GUI-heavy C++ application on a Freescale MX51-based board running Linux 2.6.35. I would like to perform heap profiling.
Unfortunately, all heap profiling tools I have found have either been too intrusive or ostensibly non-working on ARM. Specific tools I've tried:
Valgrind Massif: unworkable on my platform due to its feeble CPU. The 80% CPU-time overhead introduced by Massif causes a range of problems in my application that cannot be compensated for.
gperftools (formerly Google Performance Tools) tcmalloc: All features of this rather un-intrusive, library-based libc malloc() replacement work on my target except for the heap profiler. To rephrase, the thread caching allocator works but the profiler does not. I'll explain the failure mode of the profiler below for anyone curious.
Can anyone suggest a set of replacement tools for performing C++ heap profiling on ARM platforms? Ideal output would ultimately be a directed allocation graph, similar to what gperftools' tcmalloc outputs. Low resource utilization is a must; my platform is highly resource-constrained.
Failure mode of gperftools' tcmalloc explained:
I'm providing this information only for those that are curious; I do not expect a response. I'm seeing something similar to gperftools' issue #407 below, except on ARM rather than x86.
Specifically, I always get the message "Hooked allocator frame not found, returning empty trace." I spent some time debugging the issue, and it appears that, when dynamically linking the tcmalloc library, frame pointers at the boundary between my application and the dynamic library are null; the stack cannot be walked "above" the call into the dynamic library.
gperftools issue #407: https://github.com/gperftools/gperftools/issues/410
stackoverflow user seeing similar problems on ARM: Missing frames on shared libraries on ARM
Heaps. Many ways to do them, but I've only run across 3 main types that matter in embedded land:
Linked list heaps. Each alloc is tracked in a "used" list. Once freed, blocks are dropped into a "free" list, and adjacent blocks of free memory are joined into larger pieces. Allocs can be any size. Each alloc and free is an O(N) operation, as it has to traverse the free list to find a piece of memory, break the free block into a size close to what you asked for, and leave the remaining block in the free list. Because of the increasing overhead per alloc, this system cannot be used by itself on smaller systems. It also tends to cause memory fragmentation over time if steps aren't taken to minimize it.
Fixed size (unit) heaps. You break your heap into equal-size (smaller) parts. This wastes a bit of memory, depending on how big the chunks are (and how many different-sized, fixed-allocator heaps you create), but alloc and free are both O(1) operations. No searching, no joining. This style is often combined with the first one for "small object allocations", as the engines I've worked with have 95% of their allocations below a set size (say 256 bytes). This way, you use the unit heap for small allocs for huge speed and only minimal memory loss, while using the list heap for larger allocs. No external fragmentation of memory either. (A sketch of this style follows the three descriptions.)
Relocatable memory heaps. You don't give out pointers to memory, but handles. That way, behind the scenes, you can move memory around when needed to remove fragmentation or whatever. High overhead. High pain-in-the-#$$ quotient, as it's easy to abuse and end up with dangling pointers all over. There is also added overhead for each memory dereference. But I wanted to mention it.
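As a concrete illustration of the second type, here is a minimal sketch of a fixed-size unit heap in C; the names and sizes are hypothetical, not from any particular engine. The buffer is carved into equal chunks threaded onto an intrusive free list, so both alloc and free are O(1) pointer swaps:

#include <stddef.h>
#include <stdalign.h>

#define CHUNK_SIZE  64     /* every unit handed out is this big */
#define CHUNK_COUNT 256

static alignas(void *) unsigned char pool[CHUNK_SIZE * CHUNK_COUNT];
static void *free_list;    /* head of the intrusive free list */

void unit_heap_init(void) {
    size_t i;
    free_list = NULL;
    for (i = 0; i < CHUNK_COUNT; i++) {   /* thread every chunk onto the list */
        void **chunk = (void **)(pool + i * CHUNK_SIZE);
        *chunk = free_list;
        free_list = chunk;
    }
}

void *unit_alloc(void) {                  /* O(1): pop the list head */
    void **chunk = (void **)free_list;
    if (!chunk) return NULL;              /* pool exhausted */
    free_list = *chunk;
    return chunk;
}

void unit_free(void *p) {                 /* O(1): push back onto the list */
    *(void **)p = free_list;
    free_list = p;
}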
Those are some basic patterns. You can find all sorts of libs out in the wild that use them and also have built-in statistics for number of allocs, fragmentation, and other useful stats. It's also not that hard to roll your own, really, though I'd not recommend it for anything outside of satisfying curiosity, as debugging without a working malloc is painful indeed. Adding thread support is pretty straightforward as well, but again, downloading a ready-made solution is the better choice.
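As a minimal illustration of the statistics such a layer can expose, here is a sketch of a counting wrapper around malloc()/free() in C; the counted_* names are hypothetical, and the size-header trick assumes size_t alignment is good enough for your platform:

#include <stdio.h>
#include <stdlib.h>

static size_t g_alloc_calls, g_live_bytes, g_peak_bytes;

void *counted_malloc(size_t n) {
    /* stash the request size in a small header so counted_free can account for it */
    size_t *p = malloc(n + sizeof(size_t));
    if (!p) return NULL;
    *p = n;
    g_alloc_calls++;
    g_live_bytes += n;
    if (g_live_bytes > g_peak_bytes) g_peak_bytes = g_live_bytes;
    return p + 1;
}

void counted_free(void *ptr) {
    if (!ptr) return;
    size_t *p = (size_t *)ptr - 1;   /* step back to the size header */
    g_live_bytes -= *p;
    free(p);
}

void dump_heap_stats(void) {
    fprintf(stderr, "allocs=%zu live=%zu peak=%zu\n",
            g_alloc_calls, g_live_bytes, g_peak_bytes);
}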
The above info applies to all platforms, ARM or otherwise, though most of my experience has been on low level ARM stuff so the above info is battle tested for your platform. Hope this helps!

JVM garbage collection algorithm

I know there are different garbage collection algorithms: copy collection, mark-compact collection, and incremental collection. Which algorithm is used in the JVM? And why are there different algorithms available?
First off, there is more than one version of the JVM.
I believe most major JVMs use generational garbage collection by default. They may also use a hybrid strategy, however.
Here are some links on major JVMs using generational garbage collection:
OJVM Generational collection
Hotspot JVM
Here is a great article I found that indicates JRockit uses a marking strategy:
Comparison of three Major JVM's
Different garbage collectors have different strengths and weaknesses; important characteristics are throughput, pause times, and parallelization. Which garbage collectors are used or available depends on the JDK version, the JVM mode (client or server), and a ton of configuration settings. Keep in mind that GC technology evolves. Here are some useful links:
The Garbage-First Garbage Collector
Java SE 6 Performance White Paper
Java Tuning White Paper
Java HotSpot VM Options
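To make the configuration point concrete, here is a hedged illustration of how a collector is chosen on Sun's HotSpot JVM of that era (MyApp is a placeholder; check the flags against the "Java HotSpot VM Options" page above, since they vary by JDK version):

java -XX:+UseSerialGC MyApp          # single-threaded collector (client-class default)
java -XX:+UseParallelGC MyApp        # throughput collector, parallel young-generation collection
java -XX:+UseConcMarkSweepGC MyApp   # CMS: shorter pauses at some throughput cost
java -Xms64m -Xmx256m MyApp          # heap sizing also strongly affects GC behavior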
As the JVM develops, more and more collection algorithms appear, each addressing the shortcomings of earlier ones. As of JDK 5.0 there are four collector types: serial, throughput (parallel), concurrent, and train.

Points to be considered while designing or coding for small-footprint deliverables

Please post the points one should keep in mind while designing or coding small-footprint deliverables for embedded systems.
I am not giving compiler or platform details, as I want generic information; however, any information specific to Linux-based OSes is also welcome.
Depends on how low you want to get. I'm currently coding for fiscal printers; there's no OS, and the main rule is no dynamic memory allocation. The funny thing is that I still convinced the crew to code fully modern C++ ;).
Actually there are a few rules we decided upon:
no dynamic allocation
hence, no STL (see the sketch after this list for the usual substitute)
no exception handling (obvious reasons)
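Here is a minimal sketch, in C for brevity, of the usual substitute for a growable container under those rules; the fixed_vec names are hypothetical. Capacity is fixed at compile time, and exhaustion is an explicit, testable failure rather than a hidden allocation:

#include <stddef.h>

#define VEC_CAPACITY 16

typedef struct {
    int    items[VEC_CAPACITY];   /* storage reserved at compile time */
    size_t count;
} fixed_vec_t;

/* Returns 1 on success, 0 when full: the caller must handle exhaustion,
   because there is no heap to grow into. */
int fixed_vec_push(fixed_vec_t *v, int value) {
    if (v->count == VEC_CAPACITY)
        return 0;
    v->items[v->count++] = value;
    return 1;
}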
There isn't a general answer, only ones specific to language/platform ... but
Small memory footprint ...
Don't use Java, C#/mono, PHP, Perl, Python or anything with garbage collection
Get as close to the metal as feasible; use C
Do a lot of profiling to see where memory is being allocated, if you are using dynamic allocation
Ensure you prevent heap fragmentation by allocating sensibly sized chunks of the heap
Avoid recursive functions, especially those that use malloc(). Better to allocate a chunk up front and pass a pointer around (see the sketch after this list).
use free() ;)
Ensure your types are no bigger than required
Turn on compiler optimizations
There will be more.
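Here is a minimal sketch of the no-recursion advice in C; node_t and the tree shape are hypothetical. The recursion is replaced by one fixed-size chunk used as an explicit stack, so the worst-case memory cost is chosen up front instead of discovered at runtime:

#include <stddef.h>

typedef struct node { int value; struct node *left, *right; } node_t;

#define MAX_DEPTH 64   /* worst-case bound picked at design time */

long sum_tree(const node_t *root) {
    const node_t *stack[MAX_DEPTH];   /* one fixed chunk instead of call-stack growth */
    size_t top = 0;
    long sum = 0;
    if (root) stack[top++] = root;
    while (top > 0) {
        const node_t *n = stack[--top];
        sum += n->value;
        /* a production version must report overflow instead of silently skipping */
        if (n->left  && top < MAX_DEPTH) stack[top++] = n->left;
        if (n->right && top < MAX_DEPTH) stack[top++] = n->right;
    }
    return sum;
}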
For a really low footprint, consider writing assembly directly.
We all know that Hello World in C or C++ is 20 KB+ (because of all the default libraries that get linked in). In assembly this overhead is gone. As pointed out in the comments, one can cut the standard libraries down quite a bit. However, the fact remains that the code density you can get when hand-coding assembly is much higher than what a compiler will generate from a higher-level language. So for code where every byte matters, use assembly.
Also, when programming devices with less capable processors, assembly language may be your only way to make the program fast enough to meet real-time requirements, for instance to control machines.
When faced with such constraints, it is advisable to pre-allocate memory in order to guarantee that the system will work under load. A design pattern such as "object pooling" can be used to share resources within the system.
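A minimal sketch of that pattern in C, with hypothetical names: every object the system will ever need is reserved up front, so acquire and release just flip a flag, and behavior under load stays predictable:

#include <stddef.h>

typedef struct { int id; char payload[32]; } widget_t;   /* hypothetical pooled object */

#define POOL_SIZE 8

static widget_t      pool_objects[POOL_SIZE];   /* all reserved before the system runs */
static unsigned char pool_in_use[POOL_SIZE];

widget_t *widget_acquire(void) {
    size_t i;
    for (i = 0; i < POOL_SIZE; i++) {
        if (!pool_in_use[i]) {
            pool_in_use[i] = 1;
            return &pool_objects[i];
        }
    }
    return NULL;   /* pool exhausted: a design-time limit, not a runtime surprise */
}

void widget_release(widget_t *w) {
    pool_in_use[w - pool_objects] = 0;
}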
The C language enables tight resource (i.e. memory & compute cycles) control. It should be strongly considered.
Avoid recursion as it is easy to abuse and can result in stack overflow conditions.

Why do Cocoa apps use so much memory?

Even the standard blank-window Cocoa app that gets built when you make a new Cocoa project in Xcode uses almost 6 MB of memory. What's the reason for this? Is it possible to make an app use less, or does OS X simply manage memory differently for Cocoa apps?
Not that I'm complaining. I know that performance "hardly matters anymore" (edit: what I mean is, it matters less than readability/maintainability/the programmer's time). I'm just curious.
OS X does a lot of magic with shared memory and copy-on-write pages, so chances are that it doesn't take that much physical RAM for every application.
You can check exactly how memory blocks are mapped by running:
sudo vmmap <PID of the process>
Depends on all the frameworks (APIs) you use. Combine that with the VM allocations done by low-level ops.
It's only worth trying to reduce the total heap allocation and the resident size of your code: make sure your data structures are allocated efficiently, and try compiling with the ever-so-famous "-Os" optimization flag (optimize for size). There isn't much you can do about the VM eaten by Cocoa; I wouldn't really worry about it.
This is clearly a 'WTF' moment for developers in general. The question is usually - why does my trivial application use up so much memory.
The answer is down to the underlying framework. You could argue that 6MB is too much, but really, it is nothing.
It's not rare to see computers come with 2 GB of memory these days; the stock iMac ships with 4 GB. The whole point of the computer industry is to use up all the resources a machine has so that it continues to evolve.
Yes, you should avoid inefficiencies where possible (don't load a 5-million-point array at startup, for instance). But unless your beta demonstrates that you fudged up, just keep it on the list of to-dos.
I'm a bit out on a limb here, but I guess it's because all the libraries that get added have to do quite a bit of setting up, and since there's no need to garbage-collect, they simply get to waste memory. Plus, even if all memory were autoreleased, release would wait until the first idle event, which comes after the creation of the window. Delete unneeded libraries/frameworks, or force a garbage collection somewhere after loading the window from the nib, and see how much usage goes down if you're that concerned.
I am not concerned about it. Some of the memory might be returned later, and the rest is the price you pay for a powerful framework.
A factor that is not specific to Cocoa but applies to frameworks in general is that the overhead is not linear. There is usually a fixed and a variable "price", in terms of overhead, for using the framework.
When you create a simple blank window, the fixed overhead is crushing, but when you create an application with tens of windows, dialogs, and controls, the initial fixed overhead becomes negligible compared to the size of the application itself.

What is the memory footprint for .NET Framework Compact Edition?

What is the memory footprint for .NET Framework Compact Edition?
Thanks.
According to this Wikipedia page, it's about 12 MB.
But then again, this page says it'll run in 128 KB to 1 MB.
My guess is that it's going to vary based on how much memory you have available and it'll swap pieces in and out of memory depending on circumstances. Quoting from the second link:
Random access memory (RAM) is used to store dynamic data structures and JIT-compiled code. The .NET Compact Framework uses available RAM, up to a limit specified by the device, to cache generated code and data structures and then frees the memory when appropriate.
The common language runtime uses a code-pitching technique to free blocks of JIT-compiled code at run time when memory is low. This enables larger programs to run on RAM-constrained systems with minimal performance penalty.
Although this article is not about the compact framework (it's about the micro version), it shows a comparison between the Micro and Compact frameworks, noting that the .NET Compact Framework has a memory footprint of 12 MB.