Release backing layer for a view to reclaim memory? - cocoa-touch

I have a custom tab widget, with a lot of views whose backing CALayer objects are taking up too much memory. I'm looking at releasing views for background tabs, but it would be simpler if I could just ask the framework to release the backing CALayer (which is where most of the memory is going) and have it re-create it on demand. Is that possible?
Does a nested view hierarchy consume more memory than a flatter one, because there are more CALayer objects with mostly the same pixels? If a 100x100 view takes X memory, does a 100x100 view with a 100x100 subview take roughly 2X?
Why didn't Apple go with the AppKit model where the programmer controls which views have backing Core Animation layers? That would consume a lot less memory, which is scarce on iOS compared to OS X. Thanks.

All views are layer-backed on iOS and you have no control over this.
You should just release the inactive views and reload them as necessary.
The reason views are layer-backed on iOS is so that the GPU does the majority of the heavy lifting. This massively reduces the CPU load so that the CPU can be used for real work or be throttled down to save power.
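A minimal sketch of that release-and-rebuild approach, assuming a hypothetical tab controller with an NSMutableArray property tabContentViews, a containerView, and a buildContentViewForTab: helper (all placeholder names, not from the original question):
// Hypothetical sketch: tear down offscreen tab views and rebuild them lazily.
- (void)showTabAtIndex:(NSUInteger)index {
    for (NSUInteger i = 0; i < self.tabContentViews.count; i++) {
        if (i == index) continue;
        UIView *inactive = self.tabContentViews[i];
        if ([inactive isKindOfClass:[UIView class]]) {
            [inactive removeFromSuperview];
            // Drop the last strong reference so the view and its backing CALayer can be freed.
            self.tabContentViews[i] = [NSNull null];
        }
    }
    if ([self.tabContentViews[index] isKindOfClass:[NSNull class]]) {
        self.tabContentViews[index] = [self buildContentViewForTab:index]; // recreate on demand
    }
    [self.containerView addSubview:self.tabContentViews[index]];
}
Assuming nothing else retains the inactive views, their backing layers (where most of the memory goes) are released along with them.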

Related

How to improve MTKView rendering when using MPSImageScale and MTLBlitCommandEncoder

TL;DR: From within my MTKView's delegate drawInMTKView: method, part of my rendering pass involves adding an MPSImageBilinearScale performance shader and zero or more MTLBlitCommandEncoder requests for generateMipmapsForTexture. Is that a smart thing to do from within drawInMTKView:, which happens on the main thread? Do either of them block the main thread while running or are they only being encoded and then executed later and entirely on the GPU?
Longer Version:
I'm playing around with Metal within the context of an imaging application. I use Core Image to load an image and apply filters. The output image is displayed as a 2D plane in a metal view with a single texture. This works, but to improve performance I wanted to experiment with Core Image's ability to render out smaller tiles at a time. Each tile is rendered into its own IOSurface.
On each render pass, I check if there are any tiles that have been recently rendered. For each rendered tile (which is now an IOSurface), I create a Metal texture from a CVMetalTextureCache that is backed by the surface.
I then use a scaling MPS to copy from the tile-texture into the "master" texture. If a tile was copied over, then I issue a blit command to generate the mipmaps on the master texture.
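For context, here is a rough Objective-C sketch of that per-tile pass. The names commandBuffer, textureCache, masterTexture, tilePixelBuffer, tileWidth/tileHeight, and scaleFilter (an MPSImageBilinearScale created once with the device) are assumptions standing in for the actual objects:
// Wrap the tile's IOSurface-backed pixel buffer in a Metal texture.
CVMetalTextureRef cvTexture = NULL;
CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache,
                                          tilePixelBuffer, NULL, MTLPixelFormatBGRA8Unorm,
                                          tileWidth, tileHeight, 0, &cvTexture);
id<MTLTexture> tileTexture = CVMetalTextureGetTexture(cvTexture);

// Both the scale kernel and the blit are only *encoded* here; the work itself
// runs on the GPU after the command buffer is committed.
// (A scaleTransform / clipRect would position the tile within the master texture.)
[scaleFilter encodeToCommandBuffer:commandBuffer
                     sourceTexture:tileTexture
                destinationTexture:masterTexture];

id<MTLBlitCommandEncoder> blit = [commandBuffer blitCommandEncoder];
[blit generateMipmapsForTexture:masterTexture];
[blit endEncoding];

// In real code, keep cvTexture alive until the command buffer has completed.
CVBufferRelease(cvTexture);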
What I'm seeing is that if my master texture is quite large, then generating the mipmaps can take "a bit of time". The same is true if I have a lot of tiles. It appears this is blocking the main thread because my FPS drops significantly. (The MTKView is running at the standard 60fps.)
If I play around with tile sizes, then I can improve performance in some areas but decrease it in others. For example, increasing the tile size that Core Image renders creates fewer tiles, and thus fewer mipmap-generation and blit calls, but at the cost of Core Image taking longer to render each region.
If I decrease the size of my "master" texture, then mipmap generation goes faster since only the dirty textures are updated, but there appears to be a lower bound on how small I should make the master texture, because if I make it too small, then I need to pass a large number of textures to the fragment shader. (And it looks like that limit might be 128?)
What's not entirely clear to me is how much of this I can move off the main thread while still using MTKView. If part of the rendering pass is going to block the main thread, then I'd prefer to move it to a background thread so that UI elements (like sliders and checkboxes) remain fully responsive.
Or maybe this isn't the right strategy in the first place? Is there a better way to display really large images in Metal other than tiling? (i.e.: Images larger than Metal's texture size limit of 16384?)

Metal drawable musings... ugh

I have two issues in my Metal App.
My call to currentPassDescriptor is stalling. I have too many drawables, apparently.
I'm wholly confused on how to most performantly configure the multiple MTKViews I am using.
Issue (1)
I have a problem with currentPassDescriptor in my app. It is occasionally blocking (for 1.00s) which, according to the docs, is because there is no currentDrawable available.
Background: I have 4 HD 1920x1080 videos playing concurrently, tiled out onto a 3840x2160 second external display as a debugging configuration. The pixel buffers of these AVPlayer instances are captured by 4 independent CVDisplayLink callbacks and, from within each callback, there is a draw call to its assigned MTKView. A total of 4 MTKViews are subviews tiled on a single NSWindow, and are configured for manual drawing.
I'm using CVDisplayLink callbacks manually. If I don't, then I get stutter when mousing up on the app’s menus, for example.
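For reference, a bare-bones sketch of that wiring, with view and displayLink as placeholders and no error handling; the real app presumably does more per callback:
// CVDisplayLink callback: runs on a display-link thread, so bounce to the main
// thread before driving the view.
static CVReturn DisplayLinkCallback(CVDisplayLinkRef link, const CVTimeStamp *now,
                                    const CVTimeStamp *outputTime, CVOptionFlags flagsIn,
                                    CVOptionFlags *flagsOut, void *context)
{
    MTKView *view = (__bridge MTKView *)context;
    dispatch_async(dispatch_get_main_queue(), ^{
        [view draw]; // triggers drawInMTKView: on the delegate
    });
    return kCVReturnSuccess;
}

// Setup: put the MTKView into manual-drawing mode and start the link.
view.paused = YES;
view.enableSetNeedsDisplay = NO;
CVDisplayLinkRef displayLink;
CVDisplayLinkCreateWithActiveCGDisplays(&displayLink);
CVDisplayLinkSetOutputCallback(displayLink, &DisplayLinkCallback, (__bridge void *)view);
CVDisplayLinkStart(displayLink);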
Within each draw call, I do a bit of kernel shader work then attempt to obtain the currentPassDescriptor. If successful, I do one pass of a fragment/vertex shader and then present the drawable. My code flow follows Apple’s sample code as well as published examples.
According to the Metal System Trace, most of the draw calls take under 5ms. The GPU is about 20-25% utilized and there's about 25% of the GPU memory free. I can also cause the main thread to usleep() for 1 second without any hiccups.
Without any user interaction, there's about a 5% chance of the videos stalling out in the first minute. If there's some UI work going on, then I see that as windowServer work in Instruments. I also note that AVFoundation seems to cache about 15 frames of video onto the GPU for each AVPlayer.
If the cadence of the draw calls is upset, there's about a 10% chance that things stall completely or partially: some videos will completely stall, some will stall with 1 Hz updates, and some won't stall at all. There's also less chance of stalling when running Metal System Trace. The movies that have stalled seem to have done so on obtaining a currentPassDescriptor.
It seems like really poor design to have currentPassDescriptor block for ≈1s during a render loop, so much so that I'm thinking of eschewing MTKView altogether and just drawing to a CAMetalLayer myself. But the docs on CAMetalLayer seem to indicate the same blocking behaviour will occur.
I also grab these 4 pixel buffers on the fly and render sub-size regions-of-interest to 4 smaller MTKViews on the main monitor; but the stutters still occur if this code is removed.
Is the drawable buffer limit per MTKView or per the backing CALayer? The docs for maximumDrawableCount on CAMetalLayer say the number needs to be 2 or 3. This question ties into the configuration of the views.
Issue (2)
My current setup is a 3840x2160 NSWindow with a single content view. This subclass of NSView does some hiding/revealing of the mouse cursor by introducing an NSTrackingRectTag. The MTKViews are tiled subviews on this content view.
Is this the best configuration? Namely, one NSWindow with tiled MTKViews… or should I do one MTKView per window?
I'm also not sure how to best configure these windows/layers, i.e. by setting (or clearing) wantsLayer, wantsUpdateLayer, and/or canDrawSubviewsIntoLayer. I'm currently just setting wantsLayer to YES on the single content view. Any hints on this would be great.
Does adjusting these properties collapse all the available drawables to the backing layer only, or are there still 2 or 3 per MTKView?
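For reference, a minimal sketch of the configuration described above, with the per-layer drawable count made explicit; metalViews is a placeholder for the four MTKViews, and maximumDrawableCount is a CAMetalLayer property (recent macOS only) that accepts only 2 or 3:
// Current setup: one layer-backed content view hosting the tiled MTKViews.
contentView.wantsLayer = YES;

// Each MTKView is backed by its own CAMetalLayer, and maximumDrawableCount applies per layer.
for (MTKView *mtkView in metalViews) {
    CAMetalLayer *metalLayer = (CAMetalLayer *)mtkView.layer;
    metalLayer.maximumDrawableCount = 3; // 2 would shrink the pool at the cost of throughput
}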
NB: I've attached a sample run of my Metal app. The longest 'work' on the top graph is just under 5ms. The clumps of green/blue are rendering on the 4 MTKViews. The 'work' alternates a bit because one of the videos is a 60fps source; the others are all 30fps.

iPad iOS memory management - how to free up "Real Memory" used by UIImageViews, UIScrollViews?

I'm having memory issues with one of my apps and I've identified "Real Memory", as defined in Instruments > Activity Monitor, as a possible culprit.
My app allocates large UIImages within UIScrollViews. There's a CIImageFilter applied to one of the images. Activity Monitor shows that upon the first push of the view controller containing scroll views with large images, the real memory use jumps to around 300 MB. Subsequent pushes/pops raise it to about 500 MB:
I read that "Live Bytes" does not count memory used by textures and CALayers, so my question is: How do I properly release memory that is used by CALayers of my Image/Scrollviews?
See the real memory usage blue pie chart on the right:
Both real and virtual memory are the highest for this process:
What bothers me is that I'm trying to clean up my large scroll views and images when popping that controller, and the numbers for "Live Bytes" go down to about 5 MB, while "Real Memory" stays outrageously high (~500 MB):
ContainerScrollView* container = ...;
[container.view removeFromSuperview];
container.view = nil;
Here's the allocations profiling:
I found a person experiencing a similar issue here:
Mysterious CoreImage memory leak using ARC
The answer (I really hope it is) appears to be to start using NSData dataWithContentsOfFile: and then create a UIImage with imageWithData:. Got an image a user picked? Write it to a temporary file and read it back. I do not trust any other image methods, as they, in my 12 hours of testing, appear to behave irrationally in iOS 6.1.2 for large image views.
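A minimal sketch of that workaround, with pickedImage and imageView as placeholder names:
// Write the picked image out to disk, then load it back via NSData.
NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"picked.jpg"];
[UIImageJPEGRepresentation(pickedImage, 0.9) writeToFile:path atomically:YES];

NSData *data = [NSData dataWithContentsOfFile:path];
imageView.image = [UIImage imageWithData:data];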

Objective C iPad Animation with large images - What method to use?

I'm trying to build a weather application on the iPad, but it seems that I need some help with animation. Say I'm animating a radar: the radar source files are 10 GIF/JPEG pictures at 900x700 pixels. I've tried the UIImage animation technique using the tutorial here:
http://www.icodeblog.com/2009/07/24/iphone-programming-tutorial-animating-a-game-sprite/
but it seems that loading 10 images that big is too much for the iPad to handle, and it's crashing due to memory warnings. I'm researching other techniques to animate, but I can't seem to find something that will do this efficiently.
I've looked at others like Core Animation using sprites, and Cocos2D with sprites. Can someone point me in the right direction for the best way to animate these big images? (Keep in mind that these images are dynamic and change often, so the sprites will have to be recreated on a server and fetched by the iPad to do the animation.) Thanks
OpenGL only creates textures with dimensions that are powers of 2. In the case of your images, that's 1024x1024, which is a megapixel (roughly 4 MB of memory per image at 4 bytes per pixel). Still, that shouldn't be a problem with the iPad.
First, investigate using Xcode's profiling tools to ensure the images aren't being repeatedly loaded into memory at each loop of the animation (likely by way of new objects that aren't sharing cached textures). That could solve your problem from the start.
Second, I recommend using Cocos2D if only for the easy handling of textures and caching. Toss the images into a CCAnimation, pop that into a CCRepeatForever, run it with a CCSequence. When you're done hit CCTextureCache to release unused textures.
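A rough Cocos2D (2.x-style API) sketch of that pipeline, using CCAnimate to wrap the animation; the radar_%d.png frame names and the 10-frame count are placeholders:
// Build sprite frames through the shared texture cache so repeats reuse textures.
NSMutableArray *frames = [NSMutableArray array];
for (int i = 0; i < 10; i++) {
    NSString *name = [NSString stringWithFormat:@"radar_%d.png", i];
    CCTexture2D *texture = [[CCTextureCache sharedTextureCache] addImage:name];
    CGRect rect = CGRectMake(0, 0, texture.contentSize.width, texture.contentSize.height);
    [frames addObject:[CCSpriteFrame frameWithTexture:texture rect:rect]];
}

CCAnimation *animation = [CCAnimation animationWithSpriteFrames:frames delay:1.0 / 10.0];
CCSprite *radar = [CCSprite spriteWithSpriteFrame:frames[0]];
[radar runAction:[CCRepeatForever actionWithAction:[CCAnimate actionWithAnimation:animation]]];

// When the animation is torn down, let the cache drop anything unreferenced.
[[CCTextureCache sharedTextureCache] removeUnusedTextures];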
Third, lower your animation framerate to 30 fps or less (if only for this animation). It may be the iPad, but you're making a weather app, not a video game.
Finally, reduce the size of your images. Justify it all you want, but a large radar animation will not sell your app. And just because a website might already be playing that animation beautifully, remember that a desktop has vastly more memory and power than any smartphone.
Try breaking the animation image into smaller parts and animating those instead, treating each component as a sprite. It would be best if you used primarily code (Core Graphics) and drew your radar "by hand" instead of just using images as if they were animated GIFs.

Using Core Animation/CALayer for simple layered painting

I would like to create a custom NSView that takes a layered approach to painting. I imagine the majority of the layers would be the same width and height as the backing view.
Is it appropriate to use the Core Animation classes like CALayer for this task, even though I don't expect to need much animation? Is there a more appropriate approach?
To clarify, the view is not meant to be like a canvas in a Photoshop-like application. It's more of a data display that should allow for user interaction (selecting, moving, scrolling, etc.).
If it's display and layout you're after, I'd say that a CALayer-based architecture is a good choice. For the open source Core Plot framework, we construct all of our graphs and plot elements out of CALayers, and organize them in a regular hierarchy. CALayers are lightweight and use almost identical APIs between Mac and iPhone. They can even be made to respond to touch or mouse events.
For another example of a CALayer-based user interface, my iPhone application's entire equation entry interface is composed of CALayers, including the menu that slides up from below. Performance is slightly better than that of my previous UIView-based implementation, but the same code also works within my preliminary desktop version of the application.
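As a concrete starting point, here's a minimal layer-hosting NSView setup along those lines; the names and layout are illustrative, not taken from Core Plot:
// Layer-hosting NSView: supply the root layer yourself, *then* set wantsLayer.
CALayer *rootLayer = [CALayer layer];
self.layer = rootLayer;
self.wantsLayer = YES;

// One full-size sublayer per "painting" layer; the view draws each via the layer delegate.
CALayer *contentLayer = [CALayer layer];
contentLayer.frame = self.bounds;
contentLayer.delegate = self;   // implement drawLayer:inContext: to paint it
[rootLayer addSublayer:contentLayer];
[contentLayer setNeedsDisplay];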
For a drawing program, I would imagine it would be important to hold a buffer of the bitmap data. The only issue with using a CALayer is that the contents property is a CGImageRef. To turn that back into a graphics context for doing further drawing can be a bit of a pain. You'd have to initialize a new context, draw the bitmap data into it, then do whatever drawing operations you wanted to do, and finally turn that back into a CGImageRef. You probably wouldn't be able to avoid doing a number of pretty large memory allocations, which is virtually guaranteed to slow your program way down.
I would consider holding an off-screen buffer for each layer. Take a look at the Quartz CGLayerRef object. I think it probably does what you want to do: it's an off-screen buffer that holds things you might want to draw repeatedly. You can also quickly get a CGContextRef whenever you need it so you can do additional drawing. And you can always use that CGContextRef with NSGraphicsContext if you want to use Cocoa drawing methods.
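A small sketch of the CGLayerRef pattern being described, assuming it runs inside drawRect: of an NSView; scratchLayer is a hypothetical CGLayerRef property and the 200x200 size is arbitrary:
// Create the off-screen buffer once, keyed to the destination context.
CGContextRef destination = [[NSGraphicsContext currentContext] CGContext];
if (self.scratchLayer == NULL) {
    self.scratchLayer = CGLayerCreateWithContext(destination, CGSizeMake(200, 200), NULL);
    CGContextRef layerContext = CGLayerGetContext(self.scratchLayer);
    CGContextSetRGBFillColor(layerContext, 0.2, 0.4, 0.8, 1.0);
    CGContextFillEllipseInRect(layerContext, CGRectMake(0, 0, 200, 200));
}

// Composite the cached drawing; the drawing commands above are not re-executed on later passes.
CGContextDrawLayerAtPoint(destination, CGPointZero, self.scratchLayer);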