CAMetalLayer with texture rendered by CARenderer is not visible?

I'm using a CARenderer to render another CALayer tree into a CAMetalLayer, which I hope to use as the mask of yet another layer. For testing purposes, I've tried adding the CAMetalLayer as a normal sublayer instead of a mask.
The layer object below is not visible after adding it to a superlayer that is definitely visible. I've confirmed the frame of the layer is not a problem. Here's how I'm making the CAMetalLayer and its CARenderer.
CAMetalLayer *layer = [CAMetalLayer layer];
layer.frame = bounds;
layer.device = MTLCreateSystemDefaultDevice();
//layer.opaque = NO;
//layer.framebufferOnly = NO;
id<CAMetalDrawable> drawable = layer.nextDrawable;
_lastDrawable = drawable;
_renderer = [CARenderer rendererWithMTLTexture:drawable.texture options:nil];
_renderer.layer = self.superview.layer;
_renderer.bounds = bounds;
👇 By creating a CIImage and inspecting it with the debugger, I've confirmed the CARenderer is updating the Metal texture.
CIImage *img = [CIImage imageWithMTLTexture:_lastDrawable.texture options:nil];
But when I set the superlayer of the CAMetalLayer, it's nowhere to be seen.
[self.layer addSublayer:layer];
Here's how I'm using the CARenderer:
[_renderer beginFrameAtTime:CACurrentMediaTime() timeStamp:NULL];
[_renderer addUpdateRect:bounds];
[_renderer render];
[_renderer endFrame];
That last snippet runs frequently.
Edit 1:
I've added a backgroundColor and now the layer is visible, but its texture is not being rendered inside it.
layer.backgroundColor = NSColor.yellowColor.CGColor;

I would recommend just setting the original layer as a mask rather than trying to render it to a texture first; you’re sort of duplicating the work that CA would be doing anyway.
If you really need control over when the mask layer tree gets rendered—and again you should definitely try the standard method first—the right way to do this would be to create an IOSurface-backed MTLTexture rather than using a CAMetalLayer’s drawable, draw into the texture with your CARenderer, set the IOSurface as the contents of a regular CALayer, and use that layer as the mask.
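For reference, here is a rough sketch of that IOSurface route, assuming a fixed 200×200 mask; maskContentLayer and maskedLayer are hypothetical stand-ins for the layer tree being rendered and the layer that receives the mask:
#import <Metal/Metal.h>
#import <QuartzCore/QuartzCore.h>
#import <IOSurface/IOSurfaceObjC.h>
#import <CoreVideo/CoreVideo.h>

// Create an IOSurface to hold the rendered mask pixels.
NSDictionary *surfaceProps = @{
    IOSurfacePropertyKeyWidth: @200,
    IOSurfacePropertyKeyHeight: @200,
    IOSurfacePropertyKeyBytesPerElement: @4,
    IOSurfacePropertyKeyPixelFormat: @((uint32_t)kCVPixelFormatType_32BGRA),
};
IOSurface *surface = [[IOSurface alloc] initWithProperties:surfaceProps];

// Wrap the surface in an MTLTexture that CARenderer can draw into.
id<MTLDevice> device = MTLCreateSystemDefaultDevice();
MTLTextureDescriptor *desc =
    [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatBGRA8Unorm
                                                       width:200
                                                      height:200
                                                   mipmapped:NO];
desc.usage = MTLTextureUsageRenderTarget | MTLTextureUsageShaderRead;
id<MTLTexture> texture = [device newTextureWithDescriptor:desc
                                                iosurface:(__bridge IOSurfaceRef)surface
                                                    plane:0];

// Drive this renderer with the same beginFrame/render/endFrame calls
// shown in the question.
CARenderer *renderer = [CARenderer rendererWithMTLTexture:texture options:nil];
renderer.layer = maskContentLayer;
renderer.bounds = CGRectMake(0, 0, 200, 200);

// Display the surface through an ordinary CALayer and use it as the mask.
CALayer *maskLayer = [CALayer layer];
maskLayer.frame = CGRectMake(0, 0, 200, 200);
maskLayer.contents = surface;
maskedLayer.mask = maskLayer;
After each render pass you may need to re-assign maskLayer.contents so Core Animation notices the surface has new pixels.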

Related

SKCropNode masking edge anti-aliasing

I created a circular mask and animate a sprite inside the mask using Sprite Kit's SKCropNode class, but the edge of the mask looks pixelated.
Is there a way to use anti-aliasing to smooth the edges?
If you want to mask any SKNode/SKSpriteNode with an anti-aliasing effect, you can use SKEffectNode instead of SKCropNode. It works with animated nodes as well. Here is an example:
// Set up your node
SKNode *nodeToMask = [SKNode node];
// ...
// Set up the mask node
SKEffectNode *maskNode = [SKEffectNode node];
// Create a filter
CIImage *maskImage = [[CIImage alloc] initWithCGImage:[UIImage imageNamed:@"your_mask_image"].CGImage];
CIFilter *maskFilter = [CIFilter filterWithName:@"CISourceInCompositing"
                                  keysAndValues:@"inputBackgroundImage", maskImage, nil];
// Set the filter
maskNode.filter = maskFilter;
// Add the children
[maskNode addChild:nodeToMask];
[scene addChild:maskNode];
From the official docs:
"If the pixel in the mask has an alpha value of less than 0.05, the image pixel is masked out."
So SKCropNode doesn't support per-pixel alpha values: it either draws a pixel or it doesn't. If partial alpha (as used by anti-aliasing) matters to you, you should consider alternative routes. For example:
How to Mask an UIImageView
As of now (June 2018) I can confirm that the SKCropNode will do per-pixel alpha blending. If the crop node has an SKSpriteNode as a mask, and the mask node uses a texture with an anti-aliased mask image, the crop node will do the proper blending.
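For comparison, a minimal sketch of that setup, assuming "mask_soft_edges" names a texture whose alpha channel has anti-aliased edges:
// Crop node whose mask is a sprite with an anti-aliased texture.
SKCropNode *cropNode = [SKCropNode node];
cropNode.maskNode = [SKSpriteNode spriteNodeWithImageNamed:@"mask_soft_edges"];

// The cropped content blends per-pixel against the mask's alpha.
SKSpriteNode *content = [SKSpriteNode spriteNodeWithImageNamed:@"your_sprite"];
[cropNode addChild:content];
[scene addChild:cropNode];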

Draw several UIViews on one layer

Is it possible to draw several UIViews with custom drawing on a single CALayer, so that they don't each have their own backing store?
UPDATE:
I have several UIViews of the same size that share the same superview, and each of them does its own custom drawing. Because of their large size, they create 600-800 MB of backing stores on an iPad 3, so I want to compose their output onto one view and consume several times less memory.
Every view has its own layer, and you can't change that.
You could enable shouldRasterize to flatten a view hierarchy, which might help in some cases, but that uses GPU memory.
Another way is to create an image context, merge the drawings into a single image, and set that image as the layer contents.
One of last year's WWDC session videos about drawing showed a drawing app where many strokes were transferred into an image to speed up drawing.
Since the views will share the same backing store, I assume you want them to share the same image that results from the layer's custom drawing, right? I believe this can be done with something similar to:
// create your custom layer
MyCustomLayer* layer = [[MyCustomLayer alloc] init];
// create the custom views
UIView* view1 = [[UIView alloc] initWithFrame:CGRectMake( 0, 0, layer.frame.size.width, layer.frame.size.height)];
UIView* view2 = [[UIView alloc] initWithFrame:CGRectMake( 100, 100, layer.frame.size.width, layer.frame.size.height)];
// have the layer render itself into an image context
UIGraphicsBeginImageContext( layer.frame.size );
CGContextRef context = UIGraphicsGetCurrentContext();
[layer drawInContext:context];
UIImage* image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// set the backing stores (a.k.a the 'contents' property) of the view layers to the resulting image
view1.layer.contents = (id)image.CGImage;
view2.layer.contents = (id)image.CGImage;
// assuming we're in a view controller, add those views to the hierarchy
[self.view addSubview:view1];
[self.view addSubview:view2];

CALayer Audio Indicator with Mask

OK, what I want to do is create an audio indicator: basically overlay a mask or layer onto an image with a background color and opacity, so it looks like a red level indicator is bouncing up and down on top of a microphone image. I got this to work in a very poor way by updating the image with a UIImage mask each time, but that was very inefficient.
I'm trying to get it to work now with a CALayer, and it works better than my first trial-and-error attempt. The problem is that now I'm only showing a rectangle that moves with the corresponding level. I want it to be bounded by the microphone image, so it looks half full, for instance. When I mask to bounds, the rectangle takes the shape of the microphone and jumps up and down in that shape instead of "filling" the image.
Hopefully this isn't too confusing; I hope you can understand the premise and help! Here is some code I have working now, in the wrong way:
self.image = [UIImage imageNamed:@"img_icon_microphone.png"];
CALayer *maskLayer = [CALayer layer];
maskLayer.frame = CGRectMake(0.f, 0.f, 200.f, 200.f);
maskLayer.contents = (id) [UIImage imageNamed:@"img_icon_microphone.png"].CGImage;
micUpdateLayer = [CALayer layer];
micUpdateLayer.frame = CGRectMake(0.f, 200.f, 200.f, -5.f);
micUpdateLayer.backgroundColor = [UIColor redColor].CGColor;
micUpdateLayer.opacity = 0.5f;
[self.layer addSublayer:micUpdateLayer];
I'm then just using an NSTimer and a call to a function that simply updates the y origin of micUpdateLayer's frame to make it appear to move with the audio input.
Thank you for any suggestions!
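A minimal sketch of the arrangement the question seems to be after, assuming a 200×200 microphone image: mask a container layer with the microphone image and move only the red level layer inside it, so the level fills the microphone shape instead of taking its outline:
// Container layer masked by the microphone image.
CALayer *containerLayer = [CALayer layer];
containerLayer.frame = CGRectMake(0.f, 0.f, 200.f, 200.f);

CALayer *maskLayer = [CALayer layer];
maskLayer.frame = containerLayer.bounds;
maskLayer.contents = (id)[UIImage imageNamed:@"img_icon_microphone.png"].CGImage;
containerLayer.mask = maskLayer;

// The red level layer lives inside the container, so it is clipped
// by the microphone's alpha channel.
micUpdateLayer = [CALayer layer];
micUpdateLayer.backgroundColor = [UIColor redColor].CGColor;
micUpdateLayer.opacity = 0.5f;
[containerLayer addSublayer:micUpdateLayer];
[self.layer addSublayer:containerLayer];

// On each timer tick, grow the red layer upward from the bottom
// (level is a hypothetical 0..1 input value).
CGFloat level = 0.5f;
micUpdateLayer.frame = CGRectMake(0.f, 200.f * (1.f - level), 200.f, 200.f * level);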

iOS: Screenshot doesn't display mask layer

I have a view with a certain background color. I am masking this view with the following code:
UIView *colorableView = [[UIView alloc] init];
colorableView.backgroundColor = someColor;
CALayer *maskLayer = [CALayer layer];
maskLayer.contents = (id)[UIImage imageNamed:maskImageName].CGImage;
colorableView.layer.mask = maskLayer;
Ok everything works fine there. The view gets masked, so some parts are transparent. Now I make a screenshot of this view:
CGRect frame = colorableView.frame;
UIGraphicsBeginImageContext(frame.size);
CGContextRef c = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(someUninterestingCodeToGetACorrectPosition);
[self.view.layer renderInContext:c];
UIImage *screenShotImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return screenShotImage;
Taking a screenshot works (I actually display some other stuff above the view too, and that shows up in the screenshot as well), but somehow the mask is not recognized. What I get is a screenshot of a fully colored view (a rectangle), without the mask hiding parts of it.
I guess UIGraphicsGetImageFromCurrentImageContext() doesn't work with mask layers, so what can I do about it? I need a UIImage so I can attach the screenshot to a mail.
Thanks a lot in advance
One way to fix this can be to use Quartz functions to clip the context before rendering (CGContextClip, I don't remember exactly; you'll have to dig a little bit into the documentation).
Hope this helps.
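In that spirit, a rough sketch using CGContextClipToMask, assuming the same maskImageName used for the layer mask and that its alpha channel defines the visible area; the context is flipped before clipping because Core Graphics and UIKit use opposite vertical axes:
CGRect frame = colorableView.frame;
UIGraphicsBeginImageContextWithOptions(frame.size, NO, 0);
CGContextRef c = UIGraphicsGetCurrentContext();

// Flip, clip to the mask image, then flip back; the clip region sticks.
CGContextTranslateCTM(c, 0, frame.size.height);
CGContextScaleCTM(c, 1.0, -1.0);
CGContextClipToMask(c, CGRectMake(0, 0, frame.size.width, frame.size.height),
                    [UIImage imageNamed:maskImageName].CGImage);
CGContextScaleCTM(c, 1.0, -1.0);
CGContextTranslateCTM(c, 0, -frame.size.height);

[colorableView.layer renderInContext:c];
UIImage *screenShotImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();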

How do I render a view which contains Core Animation layers to a bitmap?

I am using an NSView to host several Core Animation CALayer objects. What I want to be able to do is grab a snapshot of the view's current state as a bitmap image.
This is relatively simple with a normal NSView using something like this:
void ClearBitmapImageRep(NSBitmapImageRep* bitmap) {
    unsigned char* bitmapData = [bitmap bitmapData];
    if (bitmapData != NULL)
        bzero(bitmapData, [bitmap bytesPerRow] * [bitmap pixelsHigh]);
}

@implementation NSView (Additions)

- (NSBitmapImageRep*)bitmapImageRepInRect:(NSRect)rect
{
    NSBitmapImageRep* imageRep = [self bitmapImageRepForCachingDisplayInRect:rect];
    ClearBitmapImageRep(imageRep);
    [self cacheDisplayInRect:rect toBitmapImageRep:imageRep];
    return imageRep;
}

@end
However, when I use this code, the Core Animation layers are not rendered.
I have investigated CARenderer, as it appears to do what I need, however I cannot get it to render my existing layer tree. I tried the following:
NSOpenGLPixelFormatAttribute att[] =
{
    NSOpenGLPFAWindow,
    NSOpenGLPFADoubleBuffer,
    NSOpenGLPFAColorSize, 24,
    NSOpenGLPFAAlphaSize, 8,
    NSOpenGLPFADepthSize, 24,
    NSOpenGLPFANoRecovery,
    NSOpenGLPFAAccelerated,
    0
};
NSOpenGLPixelFormat *pixelFormat = [[NSOpenGLPixelFormat alloc] initWithAttributes:att];
NSOpenGLView *openGLView = [[NSOpenGLView alloc] initWithFrame:[self frame] pixelFormat:pixelFormat];
NSOpenGLContext *oglctx = [openGLView openGLContext];
CARenderer *renderer = [CARenderer rendererWithCGLContext:[oglctx CGLContextObj] options:nil];
renderer.layer = myContentLayer;
[renderer render];
NSBitmapImageRep *bitmap = [openGLView bitmapImageRepInRect:[openGLView bounds]];
However, when I do this I get an exception:
CAContextInvalidLayer -- layer <CALayer: 0x1092ea0> is already attached to a context
I'm guessing that this must be because the layer tree is hosted in my NSView and therefore attached to its context. I don't understand how I can detach the layer tree from the NSView in order to render it to a bitmap, and it's non-trivial in this case to create a duplicate layer tree.
Is there some other way to get the CALayers to render to a bitmap? I can't find any sample code anywhere for doing this, in fact I can't find any sample code for CARenderer at all.
There is a great post on "Cocoa is my girlfriend" about recording Core Animation animations. The author captures the whole animation into a movie, but you could use the part where he grabs a single frame.
Jump to the "Obtaining the Current Frame" section in this article:
http://www.cimgf.com/2009/02/03/record-your-core-animation-animation/
The basic idea is:
1. Create a CGContext.
2. Use CALayer's renderInContext:.
3. Create an NSBitmapImageRep from the context (using CGBitmapContextCreateImage and NSBitmapImageRep's initWithCGImage:).
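A condensed sketch of those three steps, assuming layer is the root of the tree being captured:
// 1. Create a bitmap CGContext sized to the layer.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(NULL,
                                         (size_t)layer.bounds.size.width,
                                         (size_t)layer.bounds.size.height,
                                         8, 0, colorSpace,
                                         kCGBitmapByteOrder32Host | (CGBitmapInfo)kCGImageAlphaPremultipliedFirst);
CGColorSpaceRelease(colorSpace);

// 2. Render the layer tree into the context.
[layer renderInContext:ctx];

// 3. Turn the context into an NSBitmapImageRep.
CGImageRef cgImage = CGBitmapContextCreateImage(ctx);
NSBitmapImageRep *rep = [[NSBitmapImageRep alloc] initWithCGImage:cgImage];
CGImageRelease(cgImage);
CGContextRelease(ctx);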
Update:
I just read that the renderInContext: method does not support all kinds of layers on Mac OS X 10.5. It does not work for the following layer classes:
QCCompositionLayer
CAOpenGLLayer
QTMovieLayer
If you want sample code for how to render a CALayer hierarchy to an NSImage (or UIImage for the iPhone), you can look at the Core Plot framework's CPLayer and its -imageOfLayer method. We actually created a rendering pathway that is independent of the normal -renderInContext: process used by CALayer, because the normal way does not preserve vector elements when generating PDF representations of layers. That's why you'll see the -recursivelyRenderInContext: method in this code.
However, this won't help you if you are trying to capture from any of the layer types mentioned by weichsel (QCCompositionLayer, CAOpenGLLayer, or QTMovieLayer).