Draw several UIViews on one layer - objective-c

Is it possible to draw several UIViews with custom drawing on a single CALayer, so that they don't each have their own backing store?
UPDATE:
I have several UIViews of the same size that share the same superview. Right now each of them does custom drawing, and because of their large size they create 600-800 MB of backing stores on an iPad 3. So I want to compose their output onto one view and consume several times less memory.

Every view has its own layer and you can't change that.
You could enable shouldRasterize to flatten a view hierarchy, which might help in some cases, but that needs GPU memory (see the short snippet below).
Another way could be to create an image context, merge the drawings into an image, and set that as the layer contents.
One of last year's WWDC session videos about drawing showed a drawing app where many strokes were transferred into an image to speed up drawing.
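To illustrate the shouldRasterize suggestion (a sketch; someView stands for one of your custom-drawing views):
someView.layer.shouldRasterize = YES;
// Match the screen scale so the rasterized bitmap stays sharp on Retina displays.
someView.layer.rasterizationScale = [UIScreen mainScreen].scale;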

Since the views will share the same backing store, I assume you want them to share the same image that results from the layer's custom drawing, right? I believe this can be done with something similar to:
// create your custom layer
MyCustomLayer* layer = [[MyCustomLayer alloc] init];
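// NOTE: give the layer a non-zero frame before this point; otherwise the
// views and the image context below will end up with a zero size.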
// create the custom views
UIView* view1 = [[UIView alloc] initWithFrame:CGRectMake( 0, 0, layer.frame.size.width, layer.frame.size.height)];
UIView* view2 = [[UIView alloc] initWithFrame:CGRectMake( 100, 100, layer.frame.size.width, layer.frame.size.height)];
// have the layer render itself into an image context
UIGraphicsBeginImageContext( layer.frame.size );
CGContextRef context = UIGraphicsGetCurrentContext();
[layer drawInContext:context];
UIImage* image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// set the backing stores (a.k.a the 'contents' property) of the view layers to the resulting image
view1.layer.contents = (id)image.CGImage;
view2.layer.contents = (id)image.CGImage;
// assuming we're in a view controller, add those views to the hierarchy
[self.view addSubview:view1];
[self.view addSubview:view2];

Related

CAMetalLayer with texture rendered by CARenderer is not visible?

I'm using a CARenderer to render another CALayer tree into a CAMetalLayer, which I hope to use as the mask of yet another layer. For testing purposes, I've tried adding the CAMetalLayer as a normal sublayer instead of a mask.
The layer object below is not visible after adding it to a superlayer that is definitely visible. I've confirmed the frame of the layer is not a problem. Here's how I'm making the CAMetalLayer and its CARenderer.
CAMetalLayer *layer = [CAMetalLayer layer];
layer.frame = bounds;
layer.device = MTLCreateSystemDefaultDevice();
//layer.opaque = NO;
//layer.framebufferOnly = NO;
id<CAMetalDrawable> drawable = layer.nextDrawable;
_lastDrawable = drawable;
_renderer = [CARenderer rendererWithMTLTexture:drawable.texture options:nil];
_renderer.layer = self.superview.layer;
_renderer.bounds = bounds;
👇 By creating a CIImage and inspecting it with the debugger, I've confirmed the CARenderer is updating the Metal texture.
CIImage *img = [CIImage imageWithMTLTexture:_lastDrawable.texture options:nil];
But when I set the superlayer of the CAMetalLayer, it's nowhere to be seen.
[self.layer addSublayer:layer];
Here's how I'm using the CARenderer:
[_renderer beginFrameAtTime:CACurrentMediaTime() timeStamp:NULL];
[_renderer addUpdateRect:bounds];
[_renderer render];
[_renderer endFrame];
That last snippet runs frequently.
edit 1
I've added a backgroundColor and now the layer is visible, but its texture is not being rendered inside it.
layer.backgroundColor = NSColor.yellowColor.CGColor;
I would recommend just setting the original layer as a mask rather than trying to render it to a texture first; you’re sort of duplicating the work that CA would be doing anyway.
If you really need control over when the mask layer tree gets rendered—and again you should definitely try the standard method first—the right way to do this would be to create an IOSurface-backed MTLTexture rather than using a CAMetalLayer’s drawable, draw into the texture with your CARenderer, set the IOSurface as the contents of a regular CALayer, and use that layer as the mask.
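A rough sketch of that IOSurface route (untested and assumed on my part; rendererWithMTLTexture: needs macOS 10.13+, and maskContentLayer here stands for the layer tree you were rendering):
#import <QuartzCore/QuartzCore.h>
#import <Metal/Metal.h>
#import <IOSurface/IOSurface.h>
#import <CoreVideo/CVPixelBuffer.h>   // for kCVPixelFormatType_32BGRA

size_t width = 512, height = 512;     // illustrative mask size

// 1. An IOSurface in a BGRA format that Metal can render into.
NSDictionary *props = @{
    (id)kIOSurfaceWidth:           @(width),
    (id)kIOSurfaceHeight:          @(height),
    (id)kIOSurfaceBytesPerElement: @4,
    (id)kIOSurfacePixelFormat:     @(kCVPixelFormatType_32BGRA),
};
IOSurfaceRef surface = IOSurfaceCreate((__bridge CFDictionaryRef)props);

// 2. A Metal texture backed by that IOSurface.
id<MTLDevice> device = MTLCreateSystemDefaultDevice();
MTLTextureDescriptor *desc =
    [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatBGRA8Unorm
                                                        width:width
                                                       height:height
                                                    mipmapped:NO];
desc.usage = MTLTextureUsageRenderTarget | MTLTextureUsageShaderRead;
id<MTLTexture> texture = [device newTextureWithDescriptor:desc iosurface:surface plane:0];

// 3. Render the mask layer tree into the texture (same CARenderer calls as above).
CARenderer *renderer = [CARenderer rendererWithMTLTexture:texture options:nil];
renderer.layer = maskContentLayer;    // the layer tree you want rasterized
renderer.bounds = CGRectMake(0, 0, width, height);
[renderer beginFrameAtTime:CACurrentMediaTime() timeStamp:NULL];
[renderer addUpdateRect:renderer.bounds];
[renderer render];
[renderer endFrame];

// 4. Hand the IOSurface to a plain CALayer and use that layer as the mask.
CALayer *maskLayer = [CALayer layer];
maskLayer.frame = renderer.bounds;
maskLayer.contents = (__bridge id)surface;
self.layer.mask = maskLayer;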

UIImageView autoresizingmask not working in certain cases

I am experimenting with a block-breaking iOS app to learn more about UI features. Currently, I am having issues trying to make it work for screen rotation.
I am able to get the blocks to re-arrange properly after screen rotation but am having trouble getting the UIImageView for the paddle to re-arrange.
My code is split as follows: the VC initializes an object of the BlockerModel class. This object stores a CGRect property (the CGRect corresponding to the paddle's image view).
The VC then creates an imageView initialized with the paddle image, sets the autoresizing mask on the image view (to have flexible external margins), sets the frame based on the CGRect in the model object, and adds the imageView as a subview of the main view being handled by the VC.
The code is below.
When I rotate, I am seeing that the ImageView is not being automatically repositioned.
If I do all the image view and CGRect creation in the VC, then it works (code sample 2).
Is this expected behavior? If yes, why is autoresizing not kicking in if the CGRect is obtained from a property in another object?
Full Xcode project code is here (github link)
EDIT
Looks like things don't work if I store the imageView as a property. I was doing this to have quick access to it. Why doesn't it work if imageView is stored as a property?
Code where model is initialized
self.myModel = [[BlockerModel alloc] initWithScreenWidth:self.view.bounds.size.width andHeight:self.view.bounds.size.height];
Model initialization code
-(instancetype)initWithScreenWidth:(CGFloat)width andHeight:(CGFloat)height
{
    self = [super init];
    if (self)
    {
        self.screenWidth = width;
        self.screenHeight = height;
        UIImage* paddleImage = [UIImage imageNamed:@"paddle.png"];
        CGSize paddleSize = [paddleImage size];
        self.paddleRect = CGRectMake((self.screenWidth-paddleSize.width)/2, (1 - PADDLE_BOTTOM_OFFSET)*self.screenHeight, paddleSize.width, paddleSize.height);
    }
    return self;
}
Code in VC where imageView is initialized
self.paddleView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"paddle"]];
self.paddleView.backgroundColor = [UIColor clearColor];
self.paddleView.opaque = NO;
self.paddleView.autoresizingMask = UIViewAutoresizingFlexibleTopMargin|UIViewAutoresizingFlexibleLeftMargin|UIViewAutoresizingFlexibleRightMargin|UIViewAutoresizingFlexibleBottomMargin;
NSLog(#"Paddle rect is %#",NSStringFromCGRect(self.myModel.paddleRect));
[self.paddleView setFrame:self.myModel.paddleRect];
[self.view addSubview:self.paddleView];
If I instead use this code in the VC to initialize imageView things work
UIImage* paddleImage = [UIImage imageNamed:@"paddle.png"];
CGSize paddleSize = [paddleImage size];
CGRect paddleRect = CGRectMake((self.view.bounds.size.width-paddleSize.width)/2, (1 - PADDLE_BOTTOM_OFFSET)*self.view.bounds.size.height, paddleSize.width, paddleSize.height);
UIImageView *paddleView = [[UIImageView alloc] initWithImage:paddleImage];
paddleView.backgroundColor = [UIColor clearColor];
paddleView.opaque = NO;
paddleView.autoresizingMask = UIViewAutoresizingFlexibleTopMargin|UIViewAutoresizingFlexibleLeftMargin|UIViewAutoresizingFlexibleRightMargin|UIViewAutoresizingFlexibleBottomMargin;
[paddleView setFrame:paddleRect];
[self.view addSubview:paddleView];
Found the issue. I was using the model object to handle all my "game object location" logic. E.g. the VC would calculate the X-axis deltas from the touch events and forward them to the model object. CADisplayLink events would also be forwarded so that the model could update the ball location based on velocity and time since the last event, and then use the updated location to detect collisions. This split was used because the model class also had the methods to detect collisions with the sides, the paddle, the ball, etc.
The issue was that the model object was rewriting the CGRect of the paddleView by adding the delta it received from the VC to the origin.x of the current paddleRect it had stored. That paddleRect did not take into account the automatic adjustment that auto-resizing makes to the frame after a rotation.
The fix was for the VC to set the model's paddleRect from the paddleView's current frame before calling the method in the model that updates the game properties and detects collisions. This way the model only handles the collision-detection logic and the ball movement/velocity updates, while the VC reads the current paddleView location and therefore automatically picks up the adjustment that auto-resizing makes to the frame after a rotation.
Source code in github link updated.
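A minimal sketch of that fix (the model method name is illustrative, not taken from the linked project):
// In the VC's CADisplayLink callback.
- (void)displayLinkFired:(CADisplayLink *)link
{
    // Read the frame back from the view first, so the model starts from the
    // rect that auto-resizing may have adjusted after a rotation.
    self.myModel.paddleRect = self.paddleView.frame;

    // Hypothetical model call that moves the ball and runs collision detection.
    [self.myModel updateGameWithTimestamp:link.timestamp];

    // Push the model's (possibly updated) rect back to the view.
    self.paddleView.frame = self.myModel.paddleRect;
}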

How to interact with layer-backed views on the Mac

I am designing a user interface containing several labels and text fields. I would like to style the UI like this:
setting a background pattern for the content view of my NSWindow
adding a custom icon to the background in the upper left corner
I solved the first problem by making the content view a layer-backed view as described in Apple's documentation of NSView:
A layer-backed view is a view that is backed by a Core Animation layer. Any drawing done by the view is then cached in the backing layer. You configure a layer-backed view by simply invoking setWantsLayer: with a value of YES. The view class will automatically create a backing layer for you, and you use the view class's drawing mechanisms. When using layer-backed views you should never interact directly with the layer.
A layer-hosting view is a view that contains a Core Animation layer that you intend to manipulate directly. You create a layer-hosting view by instantiating an instance of a Core Animation layer class and setting that layer using the view’s setLayer: method. After doing so, you then invoke setWantsLayer: with a value of YES. When using a layer-hosting view you should not rely on the view for drawing, nor should you add subviews to the layer-hosting view.
and then generating a CGColorRef out of a CGPattern which draws my CGImage:
NSView *mainView = [[self window]contentView];
[mainView setWantsLayer:YES];
To set the background image as a pattern I used the answer from How to tile the contents of a CALayer here on SO to get the first task done.
However, for the second task, adding the icon, I used the code below:
CGImageRef iconImage = NULL;
NSString *path = [[NSBundle mainBundle] pathForResource:@"icon_128" ofType:@"png"];
if (path != nil) {
    NSURL *imageURL = [NSURL fileURLWithPath:path];
    CGDataProviderRef provider = CGDataProviderCreateWithURL((CFURLRef)imageURL);
    iconImage = CGImageCreateWithPNGDataProvider(provider, NULL, FALSE, kCGRenderingIntentDefault);
    CFRelease(provider);
}
CALayer *iconLayer = [[CALayer alloc] init];
// layer is the mainView's layer
CGRect layerFrame = layer.frame;
CGFloat iconWidth = 128.f;
iconLayer.frame = CGRectMake(0.f, CGRectGetHeight(layerFrame)-iconWidth, 128.f, 128.f);
iconLayer.contents = (id)iconImage;
CGImageRelease(iconImage);
[layer insertSublayer:iconLayer atIndex:0];
[iconLayer release];
The Questions
I am not sure whether I am violating Apple's restriction for layer-backed views that you should never interact directly with the layer. When setting the layer's background color I am interacting directly with the layer, or am I mistaken here?
I have a bad feeling about directly interacting with the layer hierarchy of a layer-backed view and inserting a new layer, like I did for my second task. Is this possible, or does it also violate Apple's guidelines? I want to point out that this content view of course has several subviews, such as labels, a text view and buttons.
It seems to me that using one single layer-hosting NSView would be the cleanest solution. All the text labels could then be added as CATextLayers, etc. However, if I understand Apple's documentation correctly, I cannot add any controls to such a view anymore. Would I have to code all the controls myself in custom CALayers to get it working? That sounds like reinventing the wheel de luxe. I also have no idea how one would code an NSTextField solely in Core Animation.
Any advice on how to split user interface design between Core Animation and standard controls is appreciated.
Please note that I am talking about the Mac here.
No layer backing needed IMHO:
For 1. I use a pattern image:
NSImage *patternImage = [NSImage imageNamed:@"pattern"];
[window setBackgroundColor:[NSColor colorWithPatternImage:patternImage]];
For 2. add an NSImageView as a subview of the content view (a fuller sketch follows below):
NSImageView *v = ...
[[window contentView] addSubview:v];
On the Mac some views (e.g. PDFView) don't respond nicely if they are layer-backed.
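Filling in the elided NSImageView setup might look like this (my sketch; the resource name comes from the question and the frame is illustrative):
NSImageView *v = [[NSImageView alloc] initWithFrame:NSMakeRect(0, NSMaxY([[window contentView] bounds]) - 128, 128, 128)];
v.image = [NSImage imageNamed:@"icon_128"];
// Flexible bottom and right margins keep the icon pinned to the upper-left corner.
v.autoresizingMask = NSViewMinYMargin | NSViewMaxXMargin;
[[window contentView] addSubview:v];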
Make a superview container A. Add a subview B to A for all your NSView needs (buttons, etc.). Add a subview C to A for all your Core Animation needs.
Edit:
Even better: use superview A for all your NSView needs and one subview C for your Core Animation needs, ignoring view B altogether.
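A sketch of that arrangement (assuming ARC; the view names are illustrative and iconLayer is the layer built in the question):
NSView *contentView = [window contentView];                  // superview "A": keeps the standard controls

NSView *animationHost = [[NSView alloc] initWithFrame:[contentView bounds]];  // subview "C"
[animationHost setLayer:[CALayer layer]];                    // set the layer first...
[animationHost setWantsLayer:YES];                           // ...then opt in, so the view is layer-hosting
[animationHost setAutoresizingMask:NSViewWidthSizable | NSViewHeightSizable];
[contentView addSubview:animationHost positioned:NSWindowBelow relativeTo:nil];

// Core Animation content lives only in the hosted layer tree;
// labels, text fields and buttons stay as ordinary subviews of contentView.
[[animationHost layer] addSublayer:iconLayer];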

Adding UIImageView in UIScrollView with large image blocks thread

I am loading an image in a UIImageView which I then add to a UIScrollView.
The image is a local image and is about 5000 pixels in height.
The problem is that when I add the UIImageView to the UIScrollView the thread is blocked.
This is obvious because, when I do this, I cannot scroll the UIScrollView until the image is displayed.
Here's the example.
UIScrollView *myscrollview = [[UIScrollView alloc] initWithFrame:CGRectMake(0, 0, 768, 1004)];
myscrollview.contentSize = CGSizeMake(7680, 1004);
myscrollview.pagingEnabled = TRUE;
[self.view addSubview:myscrollview];
NSString* str = [[NSBundle mainBundle] pathForResource:@"APPS.jpg" ofType:nil inDirectory:@""];
NSData *imageData = [NSData dataWithContentsOfFile:str];
UIImageView *singleImageView = [[UIImageView alloc] initWithImage:[UIImage imageWithData:imageData]];
//the line below is the blocking line
[myscrollview addSubview:singleImageView];
It is the last line in the snippet that blocks the scroller. When I leave it out everything works perfectly, except for the fact that the image is not showing, of course.
I seem to recall that multithreading does not work for UIView operations, so I guess that's out of the question as well.
Thanks for your kind help.
If you're providing these large images, you should maybe check out CATiledLayer; there's a video of a good presentation on how to use this from WWDC 2010.
If these aren't your images, and you can't downsample them or break them into tiles, you can draw the image on a background thread. You may not draw to the screen graphics context on any thread but the main thread, but that doesn't prevent you from drawing to a non-screen graphics context. On your background thread you can
create a drawing context with CGBitmapContextCreate
draw your image on it just as you would draw onto the screen in drawRect:
when you're done loading and drawing the image invoke your view's drawRect: method on the main thread using performSelectorOnMainThread:withObject:waitUntilDone:
In your view's drawRect: method, once you've fully drawn your image on the in-memory context, copy it to the screen using CGBitmapContextCreateImage and CGContextDrawImage.
This isn't trivial; you'll need to start your background thread at the right time, synchronize access to your images, etc. (a rough sketch follows below). The CATiledLayer approach is almost certainly the better one if you can find a way to manipulate the images to make that work.
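A rough sketch of that background-drawing idea, using GCD and setNeedsDisplay instead of calling drawRect: through performSelectorOnMainThread: (renderedImage is an assumed property on the custom view):
- (void)renderImageInBackground:(UIImage *)hugeImage
{
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        CGSize size = hugeImage.size;
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        // Off-screen bitmap context; drawing here never touches the screen context.
        CGContextRef ctx = CGBitmapContextCreate(NULL, (size_t)size.width, (size_t)size.height,
                                                 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
        CGColorSpaceRelease(colorSpace);
        CGContextDrawImage(ctx, CGRectMake(0, 0, size.width, size.height), hugeImage.CGImage);
        CGImageRef rendered = CGBitmapContextCreateImage(ctx);
        CGContextRelease(ctx);
        dispatch_async(dispatch_get_main_queue(), ^{
            // Back on the main thread: stash the result and ask for a redraw,
            // so drawRect: can copy it to the screen with CGContextDrawImage.
            self.renderedImage = [UIImage imageWithCGImage:rendered];
            CGImageRelease(rendered);
            [self setNeedsDisplay];
        });
    });
}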
Why do you want to load such a huge image into memory at one time? Split it into many small images and load/free them dynamically.
Try allocating the UIImageView without a UIImage, and add it as a sub-view to the UIScrollView.
Load the UIImage on a separate thread, and have the method running on the other thread set the image property of the UIImageView once the image is loaded into memory. Also, you'll probably encounter some memory problems, as an image of this size loaded into a UIImage will probably be 30 MB+.
UIScrollView *myscrollview = [[UIScrollView alloc] initWithFrame:CGRectMake(0, 0, 768, 1004)];
myscrollview.contentSize = CGSizeMake(7680, 1004);
myscrollview.pagingEnabled = TRUE;
[self.view addSubview:myscrollview];
NSString* str = [[NSBundle mainBundle] pathForResource:@"APPS.jpg" ofType:nil inDirectory:@""];
UIImageView *singleImageView = [[UIImageView alloc] init];
[myscrollview addSubview:singleImageView];
//Then fire off a method on another thread to load the UIImage and set the image
//property of the UIImageView.
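One way that background step could look with GCD (my own sketch, not the answerer's code; str and singleImageView come from the snippet above):
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // Read and decode the file off the main thread.
    NSData *imageData = [NSData dataWithContentsOfFile:str];
    UIImage *image = [UIImage imageWithData:imageData];
    dispatch_async(dispatch_get_main_queue(), ^{
        // UIKit objects may only be touched on the main thread.
        singleImageView.image = image;
        singleImageView.frame = CGRectMake(0, 0, image.size.width, image.size.height);
    });
});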
Just keep an eye on memory and beware of using convenience constructors with UIImage (or any object that could end up being huge)
Also where are you currently running this code?

How do I render a view which contains Core Animation layers to a bitmap?

I am using an NSView to host several Core Animation CALayer objects. What I want to be able to do is grab a snapshot of the view's current state as a bitmap image.
This is relatively simple with a normal NSView using something like this:
void ClearBitmapImageRep(NSBitmapImageRep* bitmap) {
    unsigned char* bitmapData = [bitmap bitmapData];
    if (bitmapData != NULL)
        bzero(bitmapData, [bitmap bytesPerRow] * [bitmap pixelsHigh]);
}
@implementation NSView (Additions)
- (NSBitmapImageRep*)bitmapImageRepInRect:(NSRect)rect
{
    NSBitmapImageRep* imageRep = [self bitmapImageRepForCachingDisplayInRect:rect];
    ClearBitmapImageRep(imageRep);
    [self cacheDisplayInRect:rect toBitmapImageRep:imageRep];
    return imageRep;
}
@end
However, when I use this code, the Core Animation layers are not rendered.
I have investigated CARenderer, as it appears to do what I need; however, I cannot get it to render my existing layer tree. I tried the following:
NSOpenGLPixelFormatAttribute att[] =
{
    NSOpenGLPFAWindow,
    NSOpenGLPFADoubleBuffer,
    NSOpenGLPFAColorSize, 24,
    NSOpenGLPFAAlphaSize, 8,
    NSOpenGLPFADepthSize, 24,
    NSOpenGLPFANoRecovery,
    NSOpenGLPFAAccelerated,
    0
};
NSOpenGLPixelFormat *pixelFormat = [[NSOpenGLPixelFormat alloc] initWithAttributes:att];
NSOpenGLView* openGLView = [[NSOpenGLView alloc] initWithFrame:[self frame] pixelFormat:pixelFormat];
NSOpenGLContext* oglctx = [openGLView openGLContext];
CARenderer* renderer = [CARenderer rendererWithCGLContext:[oglctx CGLContextObj] options:nil];
renderer.layer = myContentLayer;
[renderer render];
NSBitmapImageRep* bitmap = [openGLView bitmapImageRepInRect:[openGLView bounds]];
However, when I do this I get an exception:
CAContextInvalidLayer -- layer <CALayer: 0x1092ea0> is already attached to a context
I'm guessing that this must be because the layer tree is hosted in my NSView and therefore attached to its context. I don't understand how I can detach the layer tree from the NSView in order to render it to a bitmap, and it's non-trivial in this case to create a duplicate layer tree.
Is there some other way to get the CALayers to render to a bitmap? I can't find any sample code anywhere for doing this, in fact I can't find any sample code for CARenderer at all.
There is a great post on "Cocoa is my girlfriend" about recording Core Animation animations. The author captures the whole animation into a movie, but you could use the part where he grabs a single frame.
Jump to the "Obtaining the Current Frame" section in this article:
http://www.cimgf.com/2009/02/03/record-your-core-animation-animation/
The basic idea is (see the sketch after this list):
Create a CGContext
Use CALayer's renderInContext:
Create an NSBitmapImageRep from the context (using CGBitmapContextCreateImage and NSBitmapImageRep's initWithCGImage)
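A minimal sketch of that recipe (assuming the layer tree you want to capture is rootLayer):
CGSize size = rootLayer.bounds.size;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// 1. Create a bitmap CGContext of the layer's size.
CGContextRef ctx = CGBitmapContextCreate(NULL, (size_t)size.width, (size_t)size.height,
                                         8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
// 2. Render the layer tree into it.
[rootLayer renderInContext:ctx];
// 3. Turn the context into a CGImage, then an NSBitmapImageRep.
CGImageRef cgImage = CGBitmapContextCreateImage(ctx);
NSBitmapImageRep *rep = [[NSBitmapImageRep alloc] initWithCGImage:cgImage];
CGImageRelease(cgImage);
CGContextRelease(ctx);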
Update:
I just read that the renderInContext: method does not support all kinds of layers on Mac OS X 10.5.
It does not work for the following layer classes:
QCCompositionLayer
CAOpenGLLayer
QTMovieLayer
If you want sample code for how to render a CALayer hierarchy to an NSImage (or UIImage for the iPhone), you can look at the Core Plot framework's CPLayer and its -imageOfLayer method. We actually created a rendering pathway that is independent of the normal -renderInContext: process used by CALayer, because the normal way does not preserve vector elements when generating PDF representations of layers. That's why you'll see the -recursivelyRenderInContext: method in this code.
However, this won't help you if you are trying to capture from any of the layer types mentioned by weichsel (QCCompositionLayer, CAOpenGLLayer, or QTMovieLayer).