Can you reuse an IOSurface that has been purged? - objective-c

TL;DR: Is an IOSurfaceRef a valid surface to write to after it has been purged and its state changed to kIOSurfacePurgeableEmpty?
I'm trying to get a better understanding of what it means for an IOSurface to be purged. The only documentation I have come across is in IOSurfaceRef.h and the only sample code I've come across is in WebKit.
I'm using the command line tool memory_pressure to simulate a critical memory pressure environment for 10 seconds like so:
> memory_pressure -S -s 10 -l critical
I've written a very simple application that allocates 100 IOSurfaces with identical properties. When I use Instruments to measure the memory allocations, I see VM: IOSurface at roughly 6GB, which is about 64MB for each surface (4096 x 4096 x 4 bytes).
I then change the purgeable state of each IOSurface to kIOSurfacePurgeableVolatile and run the memory_pressure simulation.
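Roughly, the setup looks like this. This is only a minimal sketch of what is described above; the pixel format, loop count, and ARC bridging details are illustrative, but the property keys and IOSurfaceSetPurgeable() come from IOSurfaceRef.h:

#import <Foundation/Foundation.h>
#import <IOSurface/IOSurfaceRef.h>

// Inside some setup method: allocate 100 4096x4096 BGRA surfaces (~64MB each)
// and mark each one volatile so the kernel is allowed to discard its contents.
NSMutableArray *surfaces = [NSMutableArray array];
for (int i = 0; i < 100; i++) {
    NSDictionary *props = @{
        (id)kIOSurfaceWidth:           @4096,
        (id)kIOSurfaceHeight:          @4096,
        (id)kIOSurfaceBytesPerElement: @4,
        (id)kIOSurfacePixelFormat:     @((uint32_t)'BGRA'),
    };
    IOSurfaceRef surface = IOSurfaceCreate((__bridge CFDictionaryRef)props);
    if (surface == NULL) continue;

    uint32_t oldState = 0;
    IOSurfaceSetPurgeable(surface, kIOSurfacePurgeableVolatile, &oldState);
    [surfaces addObject:(__bridge_transfer id)surface];
}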
Instruments still reports that I have 6GB of surfaces allocated. However, if I check the purgeable state of each surface, they are marked as kIOSurfacePurgeableEmpty.
So it looks like they were successfully purged, but the memory is still allocated to my application. Why is that and what condition are these surfaces in?
The header file states that I should assume they have "undefined content" in them. Fair enough.
But is the actual IOSurfaceRef or IOSurface * object still valid? I can successfully query all of its properties and I can successfully lock it for reading and writing.
Am I allowed to just reuse that object even though its contents were purged or do I have to discard that instance and create an entirely new IOSurface?
macOS 10.14

Yes, it's still usable. It's just that the pixel data has been lost.
Basically, when the system is under memory pressure, it would normally page data out to disk. Marking a purgeable object volatile allows the system to simply discard that data instead. The app has indicated that the data is nice to have but not essential, and that it can be recreated if necessary.
When it wants to work with the IOSurface again, the app should mark the object nonvolatile and check the old state. If it was empty, then the app should recreate the data.
The reason Instruments reports that your app still has 6GB allocated is that it has 6GB of its address space reserved for the IOSurfaces. But allocated does not necessarily mean backed by physical RAM or by the swap file; it's just bookkeeping until the memory is actually used. Your app's resident set size (RSS) should shrink.
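For completeness, the reuse pattern described above looks roughly like this (a sketch only; error handling is minimal, and the repopulation step is just a placeholder):

// Before drawing into the surface again: make it non-volatile and
// check what state it was in while we weren't looking.
uint32_t oldState = 0;
kern_return_t kr = IOSurfaceSetPurgeable(surface, kIOSurfacePurgeableNonVolatile, &oldState);
if (kr == KERN_SUCCESS && oldState == kIOSurfacePurgeableEmpty) {
    // The pixel data was discarded. The IOSurfaceRef itself is still valid,
    // but its contents are undefined, so regenerate them.
    IOSurfaceLock(surface, 0, NULL);
    void  *base = IOSurfaceGetBaseAddress(surface);
    size_t size = IOSurfaceGetAllocSize(surface);
    memset(base, 0, size);   // placeholder: redraw the real content here
    IOSurfaceUnlock(surface, 0, NULL);
}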

Related

Weird memory corruption issue, FreeRTOS, STM32F777II

I am currently working on an embedded firmware project which uses FreeRTOS running on an STM32F777II microcontroller. Resource-wise, I have around 10 tasks (the total sum of stack sizes is under 40 KByte) at the same priority, around 4 queues of 1 KByte each, and 4 binary semaphores. I know this would be an incomplete question without posting the actual code, but I really do not have any specific portion of my firmware that I think is worth sharing for this issue, and I have a ton of business logic in my code that I cannot fully share either.
I have a struct which consists of multiple char and int arrays of specific lengths. Four of the tasks each use one of these structures. Each structure consumes around 15 KByte and is defined in the global space of the FreeRTOS environment, not locally in a task. The structs are allocated statically only, not dynamically at runtime, and since I initialize a few members of each struct at declaration, they go to the .data section, if I am not mistaken.
Until now there had been absolutely no problem whatsoever in my code; it worked 100% without any issue at all. I recently had a requirement to add the same struct to 2 more tasks. So I added this 15 KByte struct to one of my tasks, basically just allocated and initialized it, and did not do any processing in any of the tasks. I observed no problems, no data corruption, nothing. But when I allocated one more struct variable of the same type, I started to see data corruption in a lot of other places in my project. Some of the queues stopped working correctly and showed garbage data when read. Some of the other buffers also showed data corruption.
I am really not sure why just one more allocation of this struct triggers so much data corruption elsewhere in my project. If I remove this one allocation, everything goes back to normal. My MCU has 512 KB of RAM, and the IDE's build analyzer shows below 40% RAM usage, so what is triggering this issue, and what could I try? Could it be some overlap of the .data or .bss sections or something? I did not observe any stack overflows or hard faults in the system during this.
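For illustration, the kind of static allocation described above looks roughly like this (the type and field names are hypothetical; only the approximate size matches the description):

/* ~15 KByte per instance, defined at file scope, never malloc'd. */
typedef struct {
    char name[64];
    char buffer[14 * 1024];
    int  counters[256];
} task_context_t;

static task_context_t ctx_task1 = { .name = "task1" };  /* initialized -> .data */
static task_context_t ctx_task2;                        /* zero-filled -> .bss  */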
For a quick resolution,
I randomly just disabled the D-cache by commenting out the function:
SCB_EnableDCache();
and voila, everything started to function correctly as it should without any instances of data corruption.
For the long run and the correct resolution:
It looks like there are some latent issues in my code. I need to review memory use and the regions of memory with different properties, look at the buses, and review any DMA usage and the MPU settings. I also need to review the correct use of the volatile qualifier, thread-safe operation, and cache-coherency handling, and use memory fencing and cache flushing/invalidation as appropriate, as sketched below.
More details: Level 1 cache on STM32F7 Series and STM32H7 Series
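As a concrete illustration of the cache-maintenance part (a sketch only, assuming the standard CMSIS device header and a DMA receive buffer; the buffer and function names are hypothetical):

#include "stm32f7xx.h"   /* CMSIS device header: provides the SCB_* cache functions */

/* DMA buffers on the Cortex-M7 should be 32-byte aligned and sized in
 * multiples of the 32-byte cache line, so cache maintenance on this buffer
 * cannot clobber unrelated variables sharing a line. */
#define RX_BUF_LEN 512
static uint8_t rx_buf[RX_BUF_LEN] __attribute__((aligned(32)));

void start_dma_receive(void)
{
    /* Drop any cached (possibly dirty) lines covering the buffer so they
     * cannot be written back on top of the data DMA is about to deliver. */
    SCB_InvalidateDCache_by_Addr((uint32_t *)rx_buf, RX_BUF_LEN);
    /* ...start the DMA transfer into rx_buf here (HAL/LL call)... */
}

void dma_receive_complete_callback(void)
{
    /* Invalidate again so the CPU reads what DMA actually wrote to RAM,
     * not stale cached copies. For transmit buffers, the equivalent step
     * is SCB_CleanDCache_by_Addr() before starting the DMA. */
    SCB_InvalidateDCache_by_Addr((uint32_t *)rx_buf, RX_BUF_LEN);
    /* ...process rx_buf... */
}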

How to programmatically purge/clean cocoa application memory?

I'm working on a Mac app. Initially, monitoring Xcode's memory report while I ran my app showed the memory ramping up like crazy. I used Instruments and profiled my app for allocations and leaks. It turned out there wasn't much leaked memory, as you would expect due to strong reference cycles etc. However, there was a lot of abandoned memory. By following the stack traces that led to my code I have fixed about 70% of it using autorelease pools etc. Still, the remaining 30% of abandoned memory seems to point to system calls.
Based on that, I have two questions:
1) I want to fix the remaining 30%. How can I get rid of abandoned memory? I have already used Instruments and know exactly where those system calls are spawned, but I still don't know what to do to have that memory cleaned up. (I'm using ARC, so no manual retain/release, and autorelease doesn't seem to make a difference.)
2) Once I know that whatever my application was doing has completed and there is no need for any memory to be held (just like when the application first started), I want to get rid of all memory that my app has used. I plan to use this as a brute-force approach to clean up all memory, just like the system would if the user closed the app or turned off the system.
Basically, if I knew where my app's memory was in the file system, I'd just programmatically call the purge command on it, or something similar. At this point I'm 100% sure nothing needs to be in memory for the app except the first screen you would expect the first time you launch the app.
I read this, this, this and this but they weren't helpful.
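For reference, the autorelease-pool technique mentioned above (the part that fixed about 70%) looks roughly like this; the collection and helper method are hypothetical:

// Drain temporary objects at the end of each loop pass instead of letting
// them accumulate as "abandoned" memory until some outer pool drains.
for (NSURL *fileURL in imageURLs) {                 // imageURLs: hypothetical
    @autoreleasepool {
        NSImage *image = [[NSImage alloc] initWithContentsOfURL:fileURL];
        [self processImage:image];                  // hypothetical helper
    }   // image and any autoreleased temporaries are released here
}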

Kernel - Scheduler: what happens when switching between processes

Context:
I don't really understand how the kernel saves the state of running code when that code exceeds its time slice.
I can't really visualize what actually happens.
Question:
1) Where is the currently running code (and its stack?) stored?
2) When the kernel "sees" the code again, will it just pick up at an offset and keep going as if nothing had happened?
It is not clear to me.
Thanks
The current instruction pointer and stack pointer are stored in task_struct->ip and task_struct->sp (for x86), and the new process's task_struct->ip and task_struct->sp are loaded back into the ip and sp registers when switch_to() is called in the Linux kernel.
The kernel's switch_to() does many things, such as setting up EIP, the stack, FPU state, segment descriptors, and debug registers again while switching to the new process.
Then the kernel's switch_mm() switches the virtual memory mappings from the last process to the new one.
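As a deliberately simplified, non-Linux sketch of the same idea (the struct and helper functions below are hypothetical placeholders; in a real kernel the save/restore is architecture-specific assembly):

/* Minimal "process control block": just enough to resume the process later. */
struct mm;                                   /* stand-in for an address space */
struct pcb {
    void         *ip;                        /* saved instruction pointer       */
    void         *sp;                        /* saved stack pointer             */
    unsigned long regs[16];                  /* saved general-purpose registers */
    struct mm    *mm;                        /* page tables / memory mappings   */
};

void save_cpu_state(struct pcb *p);          /* hypothetical, really done in asm */
void load_cpu_state(const struct pcb *p);
void switch_address_space(struct mm *m);     /* analogous to switch_mm()         */

void context_switch(struct pcb *prev, struct pcb *next)
{
    save_cpu_state(prev);                /* prev can be resumed later from here */
    switch_address_space(next->mm);
    load_cpu_state(next);                /* next continues where it left off    */
}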
It depends on the OS but as a general rule there is a block of storage which holds information about each process (usually called the Process Control Block or PCB). This information includes a pointer to the current line of code that is being executed and the contents of registers etc, so the process can start again where it stopped last time.
This block of information is owned by the OS itself not the process so it lives beyond the suspension of the process.
The program code itself is not stored in the PCB - it simply exists in memory or on disk. It can even be shared between processes, for example several processes may be running the same program, each at a different point in the code at any given time and each with their own set of 'variables' or data unique to that process's run of the program. All the OS needs is the variables and the line number or pointer to know where a particular process was in the code when it was suspended, and it can start from that point again.
It is worth noting that any RAM the process was using may or may not still be there when it restarts. In general, an OS will try to leave recently used or frequently used RAM chunks (or 'pages') in memory if possible. If it needs to free up space, however, it may swap a page out to disk, but disk access is much, much slower, hence the desire to avoid swapping out memory that is likely to be used again.
In the worst case, an OS may find it swaps out a process and then, very soon after, the new process needs to use some memory which has to be retrieved from disk. It is suspended while this happens, as the retrieval takes a long time in CPU terms. It may then happen that the next process also very soon finds itself in the same situation. The OS is now spending a lot of its time swapping processes and memory in and out and much less of its time doing real work - this is commonly called 'thrashing'.

Why is the available free space on the device changing frequently in iPhone SDK?

In my application I'm displaying the available free space for my application on the device. The value changes every time I launch the application. Why is this happening?
Initially my available free space is 30,752,768 bytes, and after allocating all the objects the available free memory is 2,809,856 bytes. After this I added some images to the view using the camera option, then I received a memory warning (level = 1), and after that the free memory is 12,705,792 bytes. Why did it increase like this?
Can anyone help me?
Memory warnings trigger a release of unneeded objects. This may include views that are not displayed and intermediary data that does not need to be in memory, especially stuff that can be recreated as needed – perhaps even pre-scaled images.
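For example, that release of recreatable data typically happens in response to the warning itself; a hypothetical sketch (the properties are illustrative):

// In a UIViewController subclass: drop anything that can be rebuilt on demand.
- (void)didReceiveMemoryWarning {
    [super didReceiveMemoryWarning];
    self.scaledImageCache = nil;   // hypothetical cache of pre-scaled images
    self.offscreenView    = nil;   // a view not currently on screen, rebuilt lazily
}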

Memory usage drops after a week

I have this app written in VB.Net with WinForms that shows some stats and pictures on a big-screen monitor. I also monitor the memory usage of said app by using this:
Process.WorkingSet64
I know Windows does not always report the correct usage, but I just wanted to check that I didn't have any little memory leaks (I did have some, but they are solved now). The first week the memory usage was around 100MB, and the second week it showed around 50MB.
So why did it all of a sudden drop while still running the exact same code?
I can hardly imagine that the garbage collector kicked in this late, since the app refreshes every 10 seconds and has ample time in between those periods to do its thing.
Or perhaps there is just a better, more reliable way to get the memory usage of a process.
Process.WorkingSet64 doesn't report total memory usage; it omits the memory that has been swapped out to disk:
The value returned by this property represents the current size of working set memory used by the process. The working set of a process is the set of memory pages currently visible to the process in physical RAM memory. These pages are resident and available for an application to use without triggering a page fault. (MSDN)
Even if your system was never low on free memory, you may have minimized the application window, which caused Windows to trim its working set.
If you want to look for memory leaks you should probably use Process.PrivateMemorySize64 instead. Your shared memory is going to contain just executable code and it's going to remain more or less constant throughout the life of the process, so you should focus on the private memory.