Mask a UIView with a black-and-white UIImage on iOS - Objective-C

I'm developing an iOS app for iPad. I'd like to mask a UIView with a black-and-white image, so that the black part of the image is the part of the view you can see.
I've tried different code like the snippet below, but it doesn't work:
UIImage *_maskingImage = [UIImage imageNamed:@"ipadmask.jpg"];
CALayer *_maskingLayer = [CALayer layer];
_maskingLayer.frame = vistafunda.bounds;
[_maskingLayer setContents:(id)[_maskingImage CGImage]];
[vistafunda.layer setMask:_maskingLayer];
vistafunda.layer.masksToBounds = YES;
The ipadmask.jpg looks like this:
Thanks for your help!

The Apple documentation on the CALayer mask property says:
An optional layer whose alpha channel is used as a mask to select
between the layer's background and the result of compositing the
layer's contents with its filtered background.
So you should fix the alpha channel of your image: black pixels should be opaque and white pixels fully transparent.
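A minimal sketch of that fix, assuming the vistafunda view and ipadmask.jpg from the question: redraw the JPEG as 8-bit grayscale, wrap the result in a CGImageMask (whose dark samples count as painted), bake that into an image with a real alpha channel, and use it as the mask layer's contents.
UIImage *bwImage = [UIImage imageNamed:@"ipadmask.jpg"];
size_t width  = CGImageGetWidth(bwImage.CGImage);
size_t height = CGImageGetHeight(bwImage.CGImage);

// 1. Redraw the JPEG into an 8-bit grayscale bitmap so its sample layout
//    is valid for CGImageMaskCreate (a JPEG usually decodes as 32-bit RGB).
CGColorSpaceRef graySpace = CGColorSpaceCreateDeviceGray();
CGContextRef grayCtx = CGBitmapContextCreate(NULL, width, height, 8, width,
                                             graySpace, (CGBitmapInfo)kCGImageAlphaNone);
CGContextDrawImage(grayCtx, CGRectMake(0, 0, width, height), bwImage.CGImage);
CGImageRef grayImage = CGBitmapContextCreateImage(grayCtx);

// 2. In an image mask, dark samples count as "painted", so the black
//    areas of the JPEG will end up opaque below.
CGImageRef imageMask = CGImageMaskCreate(width, height, 8, 8,
                                         CGImageGetBytesPerRow(grayImage),
                                         CGImageGetDataProvider(grayImage),
                                         NULL, false);

// 3. Bake the mask into a normal image with a real alpha channel:
//    clip to the mask and fill, so black -> opaque, white -> transparent.
UIGraphicsBeginImageContextWithOptions(CGSizeMake(width, height), NO, 1.0);
CGContextRef fillCtx = UIGraphicsGetCurrentContext();
// UIKit contexts are flipped relative to Core Graphics; flip back so the
// mask isn't applied upside down.
CGContextTranslateCTM(fillCtx, 0, height);
CGContextScaleCTM(fillCtx, 1.0, -1.0);
CGContextClipToMask(fillCtx, CGRectMake(0, 0, width, height), imageMask);
CGContextSetFillColorWithColor(fillCtx, [UIColor blackColor].CGColor);
CGContextFillRect(fillCtx, CGRectMake(0, 0, width, height));
UIImage *alphaMaskImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// 4. Use the alpha image as the mask layer's contents, as in the question.
CALayer *maskingLayer = [CALayer layer];
maskingLayer.frame = vistafunda.bounds;
maskingLayer.contents = (id)alphaMaskImage.CGImage;
vistafunda.layer.mask = maskingLayer;

CGImageRelease(imageMask);
CGImageRelease(grayImage);
CGContextRelease(grayCtx);
CGColorSpaceRelease(graySpace);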

Related

Get UIImage from UIImageView with masked subviews

Through face detection, I want to blur the eyes and mouth of a person. So I have an imageView that contains 3 subviews (one for each eye and one for the mouth). Each of these subviews was masked with a PNG shape (with a clear background) to avoid showing a rectangle.
On screen my imageView looks like this: http://screencast.com/t/ak4SkNXM0I
And I want to obtain the image for storing in another place, so I've tried this:
CGSize size = [imageView bounds].size;
UIGraphicsBeginImageContext(size);
[[imageView layer] renderInContext:UIGraphicsGetCurrentContext()];
UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
But finalImage is an image like this:
http://screencast.com/t/eDlvGqqY
My subviews (eyes and mouth) come out unmasked, unlike above.
Any idea?
Thanks.
Edit:
I have to use a library compatible with iOS 6.
You can check the new APIs added in iOS 7. Try one of the following methods:
snapshotViewAfterScreenUpdates:
resizableSnapshotViewFromRect:afterScreenUpdates:withCapInsets: for a resizable image
drawViewHierarchyInRect:afterScreenUpdates:
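A hedged sketch using the last of those, assuming the imageView from the question; unlike renderInContext:, drawViewHierarchyInRect:afterScreenUpdates: captures what is actually on screen, masks included. It is iOS 7 only, though, so on iOS 6 you would have to composite each masked subview manually instead.
UIGraphicsBeginImageContextWithOptions(imageView.bounds.size, NO, 0.0);
[imageView drawViewHierarchyInRect:imageView.bounds afterScreenUpdates:YES];
UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();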

CALayer Audio Indicator with Mask

OK, what I want to do is create an audio indicator: basically overlay a mask or layer onto an image with a background color and opacity, so it looks like a red level indicator is bouncing up and down on top of a microphone image. I got this to work in a very poor way by updating the image each time with a UIImage mask, but that was very inefficient.
I'm now trying to get it to work with a CALayer, which it does, and better than the first trial-and-error way. The problem is I'm only showing a rectangle with the corresponding level. I want it to be bounded by the microphone image, so it looks half full, for instance. When I mask to bounds, the rectangle takes the shape of the microphone and jumps up and down in that shape instead of "filling" the image.
Hopefully this isn't too confusing; I hope you can understand the premise and help! Here is some code I have working now, in the wrong way:
self.image = [UIImage imageNamed:@"img_icon_microphone.png"];
CALayer *maskLayer = [CALayer layer];
maskLayer.frame = CGRectMake(0.f, 0.f, 200.f, 200.f);
maskLayer.contents = (id) [UIImage imageNamed:@"img_icon_microphone.png"].CGImage;
micUpdateLayer = [CALayer layer];
micUpdateLayer.frame = CGRectMake(0.f, 200.f, 200.f, -5.f);
micUpdateLayer.backgroundColor = [UIColor redColor].CGColor;
micUpdateLayer.opacity = 0.5f;
[self.layer addSublayer:micUpdateLayer];
I'm then just using an NSTimer and a call to a function that simply updates the y origin of micUpdateLayer's frame to make it appear to move with the audio input.
Thank you for any suggestions!
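One way to get the "filling" effect, sketched under the assumption that the microphone PNG has transparency and that micUpdateLayer is the ivar from the code above: leave the microphone image alone and clip only the red level layer, by hosting it in a container layer whose mask is shaped like the microphone. The levelContainer layer is an addition of this sketch, not from the original code.
self.image = [UIImage imageNamed:@"img_icon_microphone.png"];

// Mask shaped like the microphone (uses the PNG's alpha channel).
CALayer *maskLayer = [CALayer layer];
maskLayer.frame = CGRectMake(0.f, 0.f, 200.f, 200.f);
maskLayer.contents = (id)[UIImage imageNamed:@"img_icon_microphone.png"].CGImage;

// The red level layer, as in the question.
micUpdateLayer = [CALayer layer];
micUpdateLayer.frame = CGRectMake(0.f, 200.f, 200.f, -5.f);
micUpdateLayer.backgroundColor = [UIColor redColor].CGColor;
micUpdateLayer.opacity = 0.5f;

// Host the level in a container clipped to the microphone shape, so the
// red bar "fills" the image instead of replacing it.
CALayer *levelContainer = [CALayer layer];
levelContainer.frame = CGRectMake(0.f, 0.f, 200.f, 200.f);
levelContainer.mask = maskLayer;
[levelContainer addSublayer:micUpdateLayer];
[self.layer addSublayer:levelContainer];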

iOS: Screenshot doesn't display mask layer

I have a view with a certain background color. I am masking this view with the following code:
UIView *colorableView = [[UIView alloc] init];
colorableView.backgroundColor = someColor;
CALayer *maskLayer = [CALayer layer];
maskLayer.contents = (id)[UIImage imageNamed:maskImageName].CGImage;
colorableView.layer.mask = maskLayer;
OK, everything works fine there: the view gets masked, so some parts are transparent. Now I take a screenshot of this view:
CGRect frame = colorableView.frame;
UIGraphicsBeginImageContext(frame.size);
CGContextRef c = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(someUninterestingCodeToGetACorrectPosition);
[self.view.layer renderInContext:c];
UIImage *screenShotImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return screenShotImage;
Taking a screenshot works (I actually display some other stuff above the view too, and that gets captured in the screenshot as well), but somehow the mask is not applied. What I get is a screenshot of a fully colored view (a rectangle) without the mask hiding parts of it.
I guess `UIGraphicsGetImageFromCurrentImageContext()` doesn't work with mask layers, so what can I do about it? I need a UIImage so I can attach the screenshot to a mail.
Thanks a lot in advance
One way to fix this is to use Quartz functions to clip the view (CGContextClip or similar; I don't remember exactly, you'll have to dig a little into the documentation).
Hope this helps.
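The clipping idea can be sketched like this; since renderInContext: ignores layer.mask, an equivalent trick is to render the unmasked view and then multiply its alpha by the mask image's alpha with the kCGBlendModeDestinationIn blend mode. colorableView and maskImageName are the names from the question.
CGRect bounds = colorableView.bounds;
UIGraphicsBeginImageContextWithOptions(bounds.size, NO, 0.0);
CGContextRef c = UIGraphicsGetCurrentContext();

// Render the view without its mask (renderInContext: drops it anyway).
[colorableView.layer renderInContext:c];

// Keep only the pixels where the mask image is opaque.
UIImage *maskImage = [UIImage imageNamed:maskImageName];
[maskImage drawInRect:bounds blendMode:kCGBlendModeDestinationIn alpha:1.0];

UIImage *screenShotImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();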

UIImageView, CGImage, and Retina Art

I've got a few CALayers in my interface, and I'm drawing images directly to the layers as opposed to imageViews.
Here's a snippet:
UIImage *anImage = [UIImage imageNamed:@"anyImage"];
CGImageRef anImageRef = [anImage CGImage];
CALayer *aLayer = [CALayer layer];
CGFloat anImageWidth = CGImageGetWidth(anImageRef);
CGFloat anImageHeight = CGImageGetHeight(anImageRef);
CGRect layerFrame = CGRectMake(0,0,anImageWidth, anImageHeight);
[aLayer setContents:(__bridge id)anImageRef];
[parentLayer addSublayer:aLayer];
So my problem is that I'm getting inconsistent results with the size of the image. On a retina device, the image appears at double the anticipated size (i.e., it matches the pixel size of the @2x image). In the simulator in retina mode, the image drawn into the layer is the anticipated size (where points match the pixels of the non-retina image).
Rather than statically setting the size, or halving it (which fixes the issue on the device but breaks non-retina displays), what is a good solution or workaround, and why is this happening?
UIImage has a scale property; it is 2.0 for retina display images. See the docs for more info.
CGImageGetWidth() and CGImageGetHeight() return pixels, whereas you need the image size in points. Use -[UIImage size] instead.
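Putting both answers together, a minimal sketch of the fix: size the layer in points via -[UIImage size] and tell the layer about the image's scale so @2x art stays sharp.
UIImage *anImage = [UIImage imageNamed:@"anyImage"];
CALayer *aLayer = [CALayer layer];
aLayer.frame = CGRectMake(0, 0, anImage.size.width, anImage.size.height); // points, not pixels
aLayer.contents = (__bridge id)anImage.CGImage;
aLayer.contentsScale = anImage.scale; // 2.0 for @2x art on a retina device
[parentLayer addSublayer:aLayer];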

Drawing on top of UIImageView to make the image transparent

I am working on an iPhone app in which I need to make a portion of an image transparent, by setting its alpha level to 0, as the user moves a finger around on it. If you happen to know the App Store application iSteam: the user should be able to move a finger around on a top image, which makes the background image show through.
Currently I am using two UIImageViews: one holds the background image, and the other, on top of it, holds a darker image. The user should be able to draw random curves on this darker image to make that part of the background image appear. I can't figure out how to make the top image, held by the topmost of the two UIImageViews, transparent.
Any idea how to do this? Also, what should I use, Quartz or OpenGL? I am new to iPhone app development and have absolutely no idea about these APIs, so some guidance from the experts would help me get ahead with iPhone SDK development.
The UIImageView has a layer, which you can refer to as its layer once you've linked your project against QuartzCore. As the user moves a finger, clip a clear shape in an opaque-color-filled graphics context the same size as the UIImageView, turn that into a CGImageRef, and set that as a CALayer's contents (again, this CALayer needs to be the same size as the UIImageView). Then set that layer as the UIImageView's layer.mask. Wherever the mask is clear, it punches a transparent hole in the layer, which means the view, which means the image the UIImageView is showing. (If that doesn't work because the UIImageView doesn't like you interfering with its layer, you can use a superview of the UIImageView instead.)
EDIT (next day): Here's sample code for a layer's delegate that punches a circular hole in the center:
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)c {
    CGRect r = CGContextGetClipBoundingBox(c);
    // A 20x20 rect in the center: the hole to punch.
    CGRect r2 = CGRectInset(r, r.size.width/2.0 - 10, r.size.height/2.0 - 10);
    UIImage *maskim;
    {
        // Build the mask image: opaque everywhere except a clear ellipse.
        UIGraphicsBeginImageContextWithOptions(r.size, NO, 0);
        CGContextRef mc = UIGraphicsGetCurrentContext();
        CGContextAddEllipseInRect(mc, r2);
        CGContextAddRect(mc, r);
        CGContextEOClip(mc); // even-odd clip: the rect minus the ellipse
        CGContextSetFillColorWithColor(mc, [UIColor blackColor].CGColor);
        CGContextFillRect(mc, r);
        maskim = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
    }
    // Clear areas of the mask become transparent areas of the layer.
    CALayer *mask = [CALayer layer];
    mask.frame = r;
    mask.contents = (id)maskim.CGImage;
    layer.mask = mask;
}
So, if that layer is a view's layer, and if the UIImageView is that view's subview, a hole is punched in the UIImageView.
Here's a screenshot of the result: