I'm making an app where the user takes a photo, an overlay is placed over the image, and the user can then save the image or upload it to Facebook or other sites.
I have managed to get the app to take the photo, and to make the overlay I am using a UIImageView, which is placed over the top of the photo.
I'm not sure how exactly I can export the image, with the overlay, as one file, to the photo library, or to Facebook.
This should give you the idea. This code may need a tweak or two, though. (I admit that I haven't run it.)
Your UIImageView's backing layer can render itself into a Core Graphics bitmap context (this will include all of its sublayers, of course), which can then provide you with a new UIImage.
UIImage *finalImage;
// Get the layer
CALayer *layer = [myImageView layer];
// Create a bitmap context and make it the current context
UIGraphicsBeginImageContext(layer.bounds.size);
// Get a reference to the context
CGContextRef ctx = UIGraphicsGetCurrentContext();
// Ask the layer to render itself (and its sublayers) into the context
[layer renderInContext:ctx];
// Then ask the context itself for the image
finalImage = UIGraphicsGetImageFromCurrentImageContext();
// Finally, pop the context off the stack
UIGraphicsEndImageContext();
These functions are all described in the UIKit Function Reference.
There's also a slightly more complicated way that gives you more control over things like color, which involves creating a CGBitmapContext and getting a CGImageRef, from which you can again produce a UIImage. That is described in the Quartz 2D Programming Guide, section "Creating a Bitmap Graphics Context".
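Once you have finalImage, saving it to the photo library is a single UIKit call. Here's a minimal sketch (I haven't run it); the callback selector's signature is the one UIKit expects. Posting to Facebook would go through whichever Facebook SDK or API you're using and isn't shown here.
// Save the composited image to the photo library.
UIImageWriteToSavedPhotosAlbum(finalImage,
                               self,
                               @selector(image:didFinishSavingWithError:contextInfo:),
                               NULL);
// UIKit calls back with exactly this selector:
- (void)image:(UIImage *)image didFinishSavingWithError:(NSError *)error contextInfo:(void *)contextInfo
{
    if (error) {
        NSLog(@"Saving to the photo library failed: %@", error);
    }
}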
I have two ImageViews, one called imageView and the other called subView (which is a subview of imageView).
I want to blend the images on these views together, with the user being able to switch the alpha of the blend with a pan. My code works, but right now, the code is slow as we are redrawing the image each time the pan gesture is moved. Is there a faster/more efficient way of doing this?
BONUS Q: I want to allow my subView image to be drawn zoomed in. Currently I've set my subView to UIViewContentModeCenter, but I can't seem to draw a zoomed-in part of my image with this content mode. Is there any way around this?
My drawRect: implementation:
- (void)drawRect:(CGRect)rect
{
    float xCenter = self.center.x - self.currentImage1.size.width/2.0;
    float yCenter = self.center.y - self.currentImage1.size.height/2.0;
    subView.alpha = self.blendAmount; // Customize the opacity of the top image.
    UIGraphicsBeginImageContext(self.currentImage1.size);
    CGContextRef c = UIGraphicsGetCurrentContext();
    CGContextSetBlendMode(c, kCGBlendModeColorBurn);
    [imageView.layer renderInContext:c];
    self.blendedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    [self.blendedImage drawAtPoint:CGPointMake(xCenter, yCenter)];
}
You need to use the GPU for image processing, which is far faster than the CPU-based drawing you're doing right now.
You can use the Core Image framework, which is fast and easy to use but requires iOS 5, or you can use OpenGL ES directly, but that requires experience and some knowledge of OpenGL shaders.
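If you go the Core Image route, here is a rough, untested sketch (check that the CIDissolveTransition filter is available on your deployment target; the method name and parameters are only for illustration). The filter mixes two images on the GPU, and its inputTime value (0 to 1) can play the role of the blend amount you drive from the pan gesture:
#import <CoreImage/CoreImage.h>
- (UIImage *)blendedImageFrom:(UIImage *)bottom to:(UIImage *)top amount:(CGFloat)amount
{
    CIImage *bottomCI = [CIImage imageWithCGImage:bottom.CGImage];
    CIImage *topCI    = [CIImage imageWithCGImage:top.CGImage];
    CIFilter *dissolve = [CIFilter filterWithName:@"CIDissolveTransition"];
    [dissolve setValue:bottomCI forKey:kCIInputImageKey];
    [dissolve setValue:topCI forKey:kCIInputTargetImageKey];
    [dissolve setValue:@(amount) forKey:kCIInputTimeKey];
    // In real code, create the CIContext once and reuse it; building one is expensive.
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef cgImage = [context createCGImage:dissolve.outputImage fromRect:[bottomCI extent]];
    UIImage *result = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return result;
}
This won't reproduce kCGBlendModeColorBurn exactly; it only shows the shape of a GPU-based pipeline you can drive from the gesture without going through drawRect: on every move.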
A bit of a newbie question for iOS: I've just started learning about Core Graphics, but I'm still not sure how it's integrated into the actual view.
So I have a method that creates a CGLayer ref, draws some bezier curves on it, and then returns the CGLayer; here's a very condensed version of my procedure:
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGLayerRef layer = CGLayerCreateWithContext(ctx, CGSizeMake(300.0,300.0), NULL);
CGContextRef layerCtx = CGLayerGetContext(layer);
Then I add some bezier fills that are dynamically generated by parsing an SVG file.
I've already verified that the SVG is being parsed properly, and I am using this code http://ponderwell.net/2011/05/converting-svg-paths-to-objective-c-paths/ to parse the path instructions (from the path tags' d attributes) into UIBezierPath objects.
So I start by creating the fill:
CGColorRef color = [UIColor colorWithRed:red green:green blue:blue alpha:1.0].CGColor;
CGContextSetFillColorWithColor(layerCtx, color);
Then I add the UIBezierPath:
SvgToBezier *bezier = [[SvgToBezier alloc] initFromSVGPathNodeDAttr:pathAttr rect:CGRectMake(0, 0, 300, 300)];
UIBezierPath *bezierPath = bezier.bezier;
CGContextAddPath(layerCtx, bezierPath.CGPath);
CGContextDrawPath(layerCtx, kCGPathFill);
And finally I return the CGLayerRef like so:
return layer;
My question: now that I have this layer ref, I want to draw it to a UIView (or whatever type of object would be best to draw to).
The ultimate goal is to have the layer show up in a UITableViewCell.
Also, the method I've described above exists in a class I've created from scratch (which inherits from NSObject). There is separate method which uses this method to assign the layer ref to a property of the class instance.
I would be creating this instance from the view controller connected to my table view. Then hopefully adding the layer to a UITableView cell within the "cellForRowAtIndexPath" method.
I hope I included all of the relevant information and I greatly appreciate your help.
Cheers
The short answer is that the layer of the view you want to draw on needs to be set to your newly created layer:
[myCurrentViewController.view setLayer:newLayer];
The long answer can be found in the excellent Quartz 2D Programming Guide, found here.
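As an alternative, a CGLayerRef is commonly drawn from inside a custom UIView's drawRect: using CGContextDrawLayerInRect. A minimal, untested sketch, where pathLayer is assumed to be a property of the view holding the CGLayerRef your method returns:
- (void)drawRect:(CGRect)rect
{
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    // Draw the previously built CGLayer, scaled to this view's bounds.
    CGContextDrawLayerInRect(ctx, self.bounds, self.pathLayer);
}
In cellForRowAtIndexPath: you could then add an instance of that custom view to the cell's contentView rather than attaching the layer to the cell directly.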
I am new to Objective-C and I want to have a screenshot, made with
CGSize imageSize = [[UIScreen mainScreen] bounds].size;
...
UIGraphicsBeginImageContext(imageSize);
...
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
"fly in" (rotating and stopping at a random angle) inside its own image container, looking like a small photo (shadow, white border). How could I realize that?
Since your question is vague, and your code shows only how you are getting the screen image, here's a general answer.
Put the image on a CALayer (set the layer's contents to the image's CGImage).
Add a border or a shadow or whatever other chrome you need to that image layer (or to another layer underneath it).
Use Core Animation to animate this layer to whatever position or state you want it to be in; a rough sketch follows.
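A rough, untested sketch of those three steps. The sizes, angles and durations are made-up values; image is assumed to be your screenshot and containerView the view the photo should land in:
#import <QuartzCore/QuartzCore.h>
CALayer *photoLayer = [CALayer layer];
photoLayer.contents = (id)image.CGImage;
photoLayer.bounds = CGRectMake(0, 0, 120, 160);
photoLayer.position = CGPointMake(CGRectGetMidX(containerView.bounds),
                                  CGRectGetMidY(containerView.bounds));
// "Small photo" chrome: white border plus a drop shadow.
photoLayer.borderColor = [UIColor whiteColor].CGColor;
photoLayer.borderWidth = 4.0;
photoLayer.shadowOpacity = 0.6;
photoLayer.shadowOffset = CGSizeMake(0.0, 3.0);
// Pick a random resting angle between -30 and +30 degrees.
CGFloat angle = (((CGFloat)arc4random_uniform(61)) - 30.0) * M_PI / 180.0;
photoLayer.transform = CATransform3DMakeRotation(angle, 0.0, 0.0, 1.0);
[containerView.layer addSublayer:photoLayer];
// Fly in from off-screen while spinning, settling at the random angle.
CABasicAnimation *fly = [CABasicAnimation animationWithKeyPath:@"position"];
fly.fromValue = [NSValue valueWithCGPoint:CGPointMake(-200.0, -200.0)];
fly.duration = 0.6;
CABasicAnimation *spin = [CABasicAnimation animationWithKeyPath:@"transform.rotation.z"];
spin.fromValue = @(angle + 4.0 * M_PI); // a couple of extra turns on the way in
spin.duration = 0.6;
[photoLayer addAnimation:fly forKey:@"fly"];
[photoLayer addAnimation:spin forKey:@"spin"];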
You can accomplish it with OpenGL ES.
Here is a great project which provides a generic base OpenGL ES view transition class.
https://github.com/epatel/EPGLTransitionView
There are several predefined transitions you can take as examples: https://github.com/epatel/EPGLTransitionView/tree/master/src
Demo3Transition actually shows how to rotate the screenshot:
https://github.com/epatel/EPGLTransitionView/blob/master/src/Demo3Transition.m
You can view the available transitions in action if you launch the DemoProject.
Hey, I've got a view with some items laid out in it, and a tiled background image. I've set its backgroundColor to a CGPatternRef. I am trying to get it so I can scale the background image in tandem with the contents of my layer. I've got the background image scaling fine; I just can't seem to figure out how to make it update when the scale changes. Here is my draw callback:
static void drawPatternImage (void *info, CGContextRef ctx)
{
    CGImageRef image = (CGImageRef) info;
    CGContextDrawTiledImage(ctx, CGRectMake(0, 0, CGImageGetWidth(image)*scaleValue, CGImageGetHeight(image)*scaleValue), image);
}
The image is drawn how I expect, but when I change the scale value and then call setNeedsDisplay on my layer, the callback method isn't called again and my background isn't updated. Is this the best way to approach this problem, or is there a more efficient way of drawing a scalable, tileable image in my view? I implemented - (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx and called CGContextDrawTiledImage from there, but when scaling my view the updating was very choppy.
Any guidance would be appreciated.
Thanks
The CGPattern needs to know the size of the pattern; you pass that in your call to CGPatternCreate. Just drawing at a different size in your callback isn't going to help, so you must destroy and re-create the pattern, passing the new size to CGPatternCreate.
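A minimal, untested sketch of that: whenever scaleValue changes, rebuild the pattern (and the pattern-based color) at the new tile size and reassign it as the layer's backgroundColor. patternImage, scaleValue and backgroundLayer are assumed to come from your existing code:
static const CGPatternCallbacks callbacks = { 0, drawPatternImage, NULL };
CGFloat tileWidth  = CGImageGetWidth(patternImage) * scaleValue;
CGFloat tileHeight = CGImageGetHeight(patternImage) * scaleValue;
CGPatternRef pattern = CGPatternCreate(patternImage,
                                       CGRectMake(0, 0, tileWidth, tileHeight), // tile bounds at the new scale
                                       CGAffineTransformIdentity,
                                       tileWidth, tileHeight,                   // xStep / yStep
                                       kCGPatternTilingConstantSpacing,
                                       true,                                    // colored pattern
                                       &callbacks);
CGColorSpaceRef patternSpace = CGColorSpaceCreatePattern(NULL);
CGFloat alpha = 1.0;
CGColorRef patternColor = CGColorCreateWithPattern(patternSpace, pattern, &alpha);
backgroundLayer.backgroundColor = patternColor; // replaces the old, differently-sized pattern
[backgroundLayer setNeedsDisplay];
CGColorRelease(patternColor);
CGColorSpaceRelease(patternSpace);
CGPatternRelease(pattern);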
I am using an NSView to host several Core Animation CALayer objects. What I want to be able to do is grab a snapshot of the view's current state as a bitmap image.
This is relatively simple with a normal NSView using something like this:
void ClearBitmapImageRep(NSBitmapImageRep* bitmap) {
    unsigned char* bitmapData = [bitmap bitmapData];
    if (bitmapData != NULL)
        bzero(bitmapData, [bitmap bytesPerRow] * [bitmap pixelsHigh]);
}
@implementation NSView (Additions)

- (NSBitmapImageRep*)bitmapImageRepInRect:(NSRect)rect
{
    NSBitmapImageRep* imageRep = [self bitmapImageRepForCachingDisplayInRect:rect];
    ClearBitmapImageRep(imageRep);
    [self cacheDisplayInRect:rect toBitmapImageRep:imageRep];
    return imageRep;
}

@end
However, when I use this code, the Core Animation layers are not rendered.
I have investigated CARenderer, as it appears to do what I need, however I cannot get it to render my existing layer tree. I tried the following:
NSOpenGLPixelFormatAttribute att[] =
{
    NSOpenGLPFAWindow,
    NSOpenGLPFADoubleBuffer,
    NSOpenGLPFAColorSize, 24,
    NSOpenGLPFAAlphaSize, 8,
    NSOpenGLPFADepthSize, 24,
    NSOpenGLPFANoRecovery,
    NSOpenGLPFAAccelerated,
    0
};
NSOpenGLPixelFormat *pixelFormat = [[NSOpenGLPixelFormat alloc] initWithAttributes:att];
NSOpenGLView* openGLView = [[NSOpenGLView alloc] initWithFrame:[self frame] pixelFormat:pixelFormat];
NSOpenGLContext* oglctx = [openGLView openGLContext];
CARenderer* renderer = [CARenderer rendererWithCGLContext:[oglctx CGLContextObj] options:nil];
renderer.layer = myContentLayer;
[renderer render];
NSBitmapImageRep* bitmap = [openGLView bitmapImageRepInRect:[openGLView bounds]];
However, when I do this I get an exception:
CAContextInvalidLayer -- layer <CALayer: 0x1092ea0> is already attached to a context
I'm guessing that this must be because the layer tree is hosted in my NSView and therefore attached to its context. I don't understand how I can detach the layer tree from the NSView in order to render it to a bitmap, and it's non-trivial in this case to create a duplicate layer tree.
Is there some other way to get the CALayers to render to a bitmap? I can't find any sample code anywhere for doing this, in fact I can't find any sample code for CARenderer at all.
There is a great post on "Cocoa is my girlfriend" about recording Core Animation animations. The author captures the whole animation into a movie, but you could use the part where he grabs a single frame.
Jump to the "Obtaining the Current Frame" section in this article:
http://www.cimgf.com/2009/02/03/record-your-core-animation-animation/
The basic idea is:
Create a bitmap CGContext
Use CALayer's renderInContext:
Create an NSBitmapImageRep from the context (using CGBitmapContextCreateImage and NSBitmapImageRep's initWithCGImage:), as sketched below
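A rough, untested sketch of those steps, assuming rootLayer is the CALayer tree you want to capture and size is its size in points:
CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
CGContextRef ctx = CGBitmapContextCreate(NULL,
                                         (size_t)size.width,
                                         (size_t)size.height,
                                         8,   // bits per component
                                         0,   // bytes per row (0 lets CG compute it)
                                         colorSpace,
                                         kCGImageAlphaPremultipliedLast);
// Render the layer tree into the bitmap context.
[rootLayer renderInContext:ctx];
// Wrap the pixels in a CGImage, then in an NSBitmapImageRep.
CGImageRef cgImage = CGBitmapContextCreateImage(ctx);
NSBitmapImageRep *bitmap = [[NSBitmapImageRep alloc] initWithCGImage:cgImage];
CGImageRelease(cgImage);
CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);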
Update:
I just read that the renderInContext: method does not support all kinds of layers on Mac OS X 10.5.
It does not work for the following layer classes:
QCCompositionLayer
CAOpenGLLayer
QTMovieLayer
If you want sample code for how to render a CALayer hierarchy to an NSImage (or UIImage for the iPhone), you can look at the Core Plot framework's CPLayer and its -imageOfLayer method. We actually created a rendering pathway that is independent of the normal -renderInContext: process used by CALayer, because the normal way does not preserve vector elements when generating PDF representations of layers. That's why you'll see the -recursivelyRenderInContext: method in this code.
However, this won't help you if you are trying to capture from any of the layer types mentioned by weichsel (QCCompositionLayer, CAOpenGLLayer, or QTMovieLayer).