I am facing a problem with taking a snapshot of a UIView that contains a CAEmitterLayer. Below is the code I am using for taking the snapshot (here editingView is my UIView):
UIGraphicsBeginImageContextWithOptions(editingView.bounds.size, NO, 0.0);
[editingView drawViewHierarchyInRect:editingView.bounds afterScreenUpdates:YES];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Please go through the links below for the GIF images:
Link for afterScreenUpdates:NO (GIF): in this case I am missing snapshot data.
Link for afterScreenUpdates:YES (GIF): in this case the UIView blinks before it is updated, and I am also facing a performance issue.
I've had a similar issue in the past. The following might work for you as a workaround:
UIGraphicsBeginImageContextWithOptions(editingView.bounds.size, NO, 0.0);
[editingView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Note that you may lose some accuracy in the screen capture, as renderInContext: may exclude blurs and some Core Animation features.
Edit:
There seem to be issues/tradeoffs with both renderInContext: and drawViewHierarchyInRect:. The following code may be worth a try (taken from here for reference):
- (CGContextRef)createBitmapContextOfSize:(CGSize)size {
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    int bitmapByteCount;
    int bitmapBytesPerRow;

    bitmapBytesPerRow = (size.width * 4);
    bitmapByteCount = (bitmapBytesPerRow * size.height);
    colorSpace = CGColorSpaceCreateDeviceRGB();

    // bitmapData is assumed to be an instance variable (void *) reused between calls
    if (bitmapData != NULL) {
        free(bitmapData);
    }
    bitmapData = malloc(bitmapByteCount);
    if (bitmapData == NULL) {
        fprintf(stderr, "Memory not allocated!");
        CGColorSpaceRelease(colorSpace);
        return NULL;
    }

    context = CGBitmapContextCreate(bitmapData,
                                    size.width,
                                    size.height,
                                    8, // bits per component
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaNoneSkipFirst);
    if (context == NULL) {
        free(bitmapData);
        fprintf(stderr, "Context not created!");
        CGColorSpaceRelease(colorSpace);
        return NULL;
    }
    CGContextSetAllowsAntialiasing(context, NO);
    CGColorSpaceRelease(colorSpace);

    CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, size.height);
    CGContextConcatCTM(context, flipVertical);
    return context;
}
Then you can do:
CGContextRef context = [self createBitmapContextOfSize:editingView.bounds.size];
[editingView.layer renderInContext:context];
CGImageRef cgImage = CGBitmapContextCreateImage(context);
UIImage* background = [UIImage imageWithCGImage: cgImage];
CGImageRelease(cgImage);
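One thing worth noting (not part of the original snippet): the bitmap context and the malloc'd pixel buffer are not released automatically. Assuming bitmapData is the same instance variable used inside createBitmapContextOfSize:, something along these lines after the image has been created should clean them up:

CGContextRelease(context);  // release the bitmap context created above
free(bitmapData);           // free the pixel buffer allocated in createBitmapContextOfSize:
bitmapData = NULL;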
Related
How can I create a reflection of a UIImage without using a UIImageView? I have seen the Apple sample code for reflections that uses two image views, but I don't want to add an image view to my application; I just want a single image made up of the original image and its reflection. Does anybody know how to do that?
This is from UIImage+FX.m, created by Nick Lockwood
UIImage *processedImage = ...; // your image here
processedImage = [processedImage imageWithReflectionWithScale:0.15f
                                                           gap:10.0f
                                                         alpha:0.305f];
scale is the size of the reflection, gap is the distance between the image and the reflection, and alpha is the alpha of the reflection.
- (UIImage *)imageWithReflectionWithScale:(CGFloat)scale gap:(CGFloat)gap alpha:(CGFloat)alpha
{
    //get reflected image
    UIImage *reflection = [self reflectedImageWithScale:scale];
    CGFloat reflectionOffset = reflection.size.height + gap;

    //create drawing context
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(self.size.width, self.size.height + reflectionOffset * 2.0f), NO, 0.0f);

    //draw reflection
    [reflection drawAtPoint:CGPointMake(0.0f, reflectionOffset + self.size.height + gap) blendMode:kCGBlendModeNormal alpha:alpha];

    //draw image
    [self drawAtPoint:CGPointMake(0.0f, reflectionOffset)];

    //capture resultant image
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    //return image
    return image;
}
and
- (UIImage *)reflectedImageWithScale:(CGFloat)scale
{
    //get reflection dimensions
    CGFloat height = ceil(self.size.height * scale);
    CGSize size = CGSizeMake(self.size.width, height);
    CGRect bounds = CGRectMake(0.0f, 0.0f, size.width, size.height);

    //create drawing context
    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0f);
    CGContextRef context = UIGraphicsGetCurrentContext();

    //clip to gradient
    CGContextClipToMask(context, bounds, [[self class] gradientMask]);

    //draw reflected image
    CGContextScaleCTM(context, 1.0f, -1.0f);
    CGContextTranslateCTM(context, 0.0f, -self.size.height);
    [self drawInRect:CGRectMake(0.0f, 0.0f, self.size.width, self.size.height)];

    //capture resultant image
    UIImage *reflection = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    //return reflection image
    return reflection;
}
gradientMask
+ (CGImageRef)gradientMask
{
    static CGImageRef sharedMask = NULL;
    if (sharedMask == NULL)
    {
        //create gradient mask
        UIGraphicsBeginImageContextWithOptions(CGSizeMake(1, 256), YES, 0.0);
        CGContextRef gradientContext = UIGraphicsGetCurrentContext();
        CGFloat colors[] = {0.0, 1.0, 1.0, 1.0};
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
        CGGradientRef gradient = CGGradientCreateWithColorComponents(colorSpace, colors, NULL, 2);
        CGPoint gradientStartPoint = CGPointMake(0, 0);
        CGPoint gradientEndPoint = CGPointMake(0, 256);
        CGContextDrawLinearGradient(gradientContext, gradient, gradientStartPoint,
                                    gradientEndPoint, kCGGradientDrawsAfterEndLocation);
        sharedMask = CGBitmapContextCreateImage(gradientContext);
        CGGradientRelease(gradient);
        CGColorSpaceRelease(colorSpace);
        UIGraphicsEndImageContext();
    }
    return sharedMask;
}
It returns the image with the reflection.
I wrote a blog post a while ago that uses a view's CAReplicatorLayer. It's really designed for handling dynamic updates to a view with reflection, but I think it would work for what you want to do as well.
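For reference, here's a rough sketch of the CAReplicatorLayer idea (this is not the blog post's code; ReflectingView is a made-up container class): the view's backing layer is a replicator with two instances, where the second instance is flipped vertically and faded, so any subview you add gets a live reflection below it.

#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

@interface ReflectingView : UIView
@end

@implementation ReflectingView

+ (Class)layerClass
{
    // back this view with a CAReplicatorLayer instead of a plain CALayer
    return [CAReplicatorLayer class];
}

- (void)setUpReflectionWithGap:(CGFloat)gap
{
    CAReplicatorLayer *replicator = (CAReplicatorLayer *)self.layer;
    replicator.instanceCount = 2;             // the original plus one reflected copy
    replicator.instanceAlphaOffset = -0.6f;   // fade the reflected copy

    // flip the copy vertically and push it below the original
    CATransform3D transform = CATransform3DMakeTranslation(0.0f, self.bounds.size.height + gap, 0.0f);
    transform = CATransform3DScale(transform, 1.0f, -1.0f, 1.0f);
    replicator.instanceTransform = transform;
}

@end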
You can render the image and its reflection inside a graphics context and then get a CGImage from the context and from that again a UIImage.
But the question is: why not use two views? Why would you think it is a problem or limitation?
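If a single image really is what's needed, a minimal sketch of the render-into-one-context idea above could look like this (the helper function name is made up; it draws the image once, then draws it again flipped vertically and faded below it):

#import <UIKit/UIKit.h>

UIImage *ImageWithSimpleReflection(UIImage *image, CGFloat gap, CGFloat alpha)
{
    CGSize size = CGSizeMake(image.size.width, image.size.height * 2.0f + gap);
    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0f);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // draw the original image at the top
    [image drawAtPoint:CGPointZero];

    // flip the coordinate system below the original and draw the faded reflection
    CGContextTranslateCTM(context, 0.0f, image.size.height * 2.0f + gap);
    CGContextScaleCTM(context, 1.0f, -1.0f);
    [image drawAtPoint:CGPointZero blendMode:kCGBlendModeNormal alpha:alpha];

    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}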
Here is the code below. Not sure what I'm doing wrong here. It used to work when I was rendering the layer from a NIB file. I tried to change it by creating the view programmatically. Now it renders the layer, but with no transform applied.
- (NSBitmapImageRep *)getCurrentFrame
{
    CGContextRef bitmapContext = NULL;
    CGColorSpaceRef colorSpace;
    int bitmapByteCount;
    int bitmapBytesPerRow;

    int pixelsHigh = (int)[fixedWidthStringView bounds].size.height;
    int pixelsWide = (int)[fixedWidthStringView bounds].size.width;
    bitmapBytesPerRow = (pixelsWide * 4);
    bitmapByteCount = (bitmapBytesPerRow * pixelsHigh);

    colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
    bitmapContext = CGBitmapContextCreate(NULL,
                                          pixelsWide,
                                          pixelsHigh,
                                          8,
                                          bitmapBytesPerRow,
                                          colorSpace,
                                          kCGImageAlphaPremultipliedLast);
    if (bitmapContext == NULL)
    {
        NSLog(@"Failed to create bitmapContext.");
        return nil;
    }
    CGColorSpaceRelease(colorSpace);

    [CATransaction setDisableActions:YES];
    fixedWidthStringView.layer.transform = CATransform3DScale(fixedWidthStringView.layer.transform, .5, 1, 1);
    [CATransaction commit];

    [fixedWidthStringView.layer renderInContext:bitmapContext];

    CGImageRef img = CGBitmapContextCreateImage(bitmapContext);
    NSBitmapImageRep *bitmap = [[NSBitmapImageRep alloc] initWithCGImage:img];
    CFRelease(img);
    return bitmap;
}
Fixed it myself. It turns out that this works as long as I add my NSView to another NSView and render the container.
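A minimal sketch of that workaround, assuming the same fixedWidthStringView and bitmapContext from the code above (the container name is made up):

// wrap the view in a container and render the container's layer instead
NSView *container = [[NSView alloc] initWithFrame:[fixedWidthStringView frame]];
[container setWantsLayer:YES];
[container addSubview:fixedWidthStringView];

// ...apply the layer transform as before, then:
[container.layer renderInContext:bitmapContext];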
I am using this function to create screenshots of my iPad app. I am using the Sparrow Framework in my project; SPDisplayObject uses OpenGL ES based rendering.
@implementation SPDisplayObject (ScreenshotFromSPDisplayObject)

- (UIImage *)getImageScreenshot
{
    int WIDTH = 1024;
    int HEIGHT = 768;
    CGSize size = CGSizeMake(WIDTH, HEIGHT);

    // Create a buffer for the pixels
    GLuint bufferLenght = size.width * size.height * 4;
    GLubyte *buffer = (GLubyte *)malloc(bufferLenght);

    // Read pixels from OpenGL
    glReadPixels(0, 0, size.width, size.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

    // Make a data provider with the data
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, bufferLenght, NULL);

    // Configure the image
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * size.width;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    CGImageRef iref = CGImageCreate(size.width, size.height, bitsPerComponent, bitsPerPixel, bytesPerRow,
                                    colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);

    uint32_t *pixels = (uint32_t *)malloc(bufferLenght);
    CGContextRef context = CGBitmapContextCreate(pixels, WIDTH, HEIGHT, 8, WIDTH * 4,
                                                 CGImageGetColorSpace(iref),
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextTranslateCTM(context, 0, size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextDrawImage(context, CGRectMake(0.0, 0.0, size.width, size.height), iref);

    UIImage *screenshot = [UIImage imageWithCGImage:CGBitmapContextCreateImage(context)];
    UIGraphicsEndImageContext();

    // Free memory
    CGColorSpaceRelease(colorSpaceRef);
    CGDataProviderRelease(provider);
    CGImageRelease(iref);
    CGContextRelease(context);
    free(buffer);
    free(pixels);

    return screenshot;
}

@end
I use it like this from a UIViewController:

@interface
    UIImageView *screenShot;
    UIImage *tempImage;

- (void)deactivePage
{
    // attach screenshot
    tempImage = [self.stage getImageScreenshot];
    screenShot = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 1024, 768)];
    screenShot.image = tempImage;
    [self.view addSubview:screenShot];
}

- (void)dealloc
{
    screenShot.image = nil;
    [screenShot removeFromSuperview];
    [screenShot release];
    [super dealloc];
}
The UIViewController is released and deallocated approximately 5 seconds after the deactivePage method is called.
The screenshot is used for a view transition.
Taking screenshots works like a charm, but with every screenshot my app grows by around 10 MB, so I can do this around 15 times until the app crashes.
So where is the leak? I am stuck.. :-(
In the getImageScreenshot function you do this:
UIImage* screenshot = [UIImage imageWithCGImage:CGBitmapContextCreateImage(context)];
which creates a CGImageRef and then creates an (autoreleased) UIImage from it.
What happens here is that this CGImageRef remains alive and is never released, so it's leaking.
What you should do, instead, is this:
CGImageRef myCGImage = CGBitmapContextCreateImage(context);
UIImage* screenshot = [UIImage imageWithCGImage:myCGImage];
CGImageRelease(myCGImage);
Have you tried looking at it with Instruments (Leaks or Heapshots)? You should see these CGImageRef objects still alive.
I don't see where you deallocate tempImage in the UIViewController when it's going down.
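For example, a sketch under manual reference counting (matching the question's release/dealloc style) would be to retain the screenshot when storing it in the ivar and balance that retain in dealloc:

// in deactivePage: keep a strong reference to the autoreleased screenshot
tempImage = [[self.stage getImageScreenshot] retain];

// in dealloc: balance the retain
[tempImage release];
tempImage = nil;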
I'm writing code that uses UIImagePickerController. Corey previously posted some nice sample code on SO related to cropping and scaling. However, it doesn't include implementations of cropImage:to:andScaleTo: or straightenAndScaleImage().
Here's how they're used:
newImage = [self cropImage:originalImage to:croppingRect andScaleTo:scaledImageSize];
...
UIImage *rotatedImage = straightenAndScaleImage([editInfo objectForKey:UIImagePickerControllerOriginalImage], scaleSize);
Since I'm sure someone must be using something very similar to Corey's sample code, there's probably an existing implementation of these two functions. Would someone like to share?
If you check the post you linked to, you'll see a link to the Apple dev forums where I got some of this code; here are the methods you are asking about. Note: I may have made some changes relating to data types, but I can't quite remember. It should be trivial for you to adjust if needed.
- (UIImage *)cropImage:(UIImage *)image to:(CGRect)cropRect andScaleTo:(CGSize)size {
    UIGraphicsBeginImageContext(size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGImageRef subImage = CGImageCreateWithImageInRect([image CGImage], cropRect);
    CGRect myRect = CGRectMake(0.0f, 0.0f, size.width, size.height);
    CGContextScaleCTM(context, 1.0f, -1.0f);
    CGContextTranslateCTM(context, 0.0f, -size.height);
    CGContextDrawImage(context, myRect, subImage);
    UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    CGImageRelease(subImage);
    return croppedImage;
}
UIImage *straightenAndScaleImage(UIImage *image, int maxDimension) {
    CGImageRef img = [image CGImage];
    CGFloat width = CGImageGetWidth(img);
    CGFloat height = CGImageGetHeight(img);
    CGRect bounds = CGRectMake(0, 0, width, height);
    CGSize size = bounds.size;

    if (width > maxDimension || height > maxDimension) {
        CGFloat ratio = width / height;
        if (ratio > 1.0f) {
            size.width = maxDimension;
            size.height = size.width / ratio;
        }
        else {
            size.height = maxDimension;
            size.width = size.height * ratio;
        }
    }

    CGFloat scale = size.width / width;
    CGAffineTransform transform = orientationTransformForImage(image, &size);

    UIGraphicsBeginImageContext(size);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Flip
    UIImageOrientation orientation = [image imageOrientation];
    if (orientation == UIImageOrientationRight || orientation == UIImageOrientationLeft) {
        CGContextScaleCTM(context, -scale, scale);
        CGContextTranslateCTM(context, -height, 0);
    }
    else {
        CGContextScaleCTM(context, scale, -scale);
        CGContextTranslateCTM(context, 0, -height);
    }
    CGContextConcatCTM(context, transform);
    CGContextDrawImage(context, bounds, img);

    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
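Note that straightenAndScaleImage() calls orientationTransformForImage(), which isn't included in the post above. As a rough, hypothetical stand-in (not necessarily what the original helper did), you could return the standard orientation-correction transform used in common "fixOrientation" code; the mirrored orientations are omitted here, and the original may also have adjusted *size for the rotated cases:

// Hypothetical stand-in for the missing helper: builds the transform that maps
// an image's raw pixel data into its upright orientation.
static CGAffineTransform orientationTransformForImage(UIImage *image, CGSize *size)
{
    CGAffineTransform transform = CGAffineTransformIdentity;
    CGSize s = *size;

    switch (image.imageOrientation) {
        case UIImageOrientationDown:      // rotated 180 degrees
            transform = CGAffineTransformTranslate(transform, s.width, s.height);
            transform = CGAffineTransformRotate(transform, M_PI);
            break;
        case UIImageOrientationLeft:      // rotated 90 degrees counter-clockwise
            transform = CGAffineTransformTranslate(transform, s.width, 0.0f);
            transform = CGAffineTransformRotate(transform, M_PI_2);
            break;
        case UIImageOrientationRight:     // rotated 90 degrees clockwise
            transform = CGAffineTransformTranslate(transform, 0.0f, s.height);
            transform = CGAffineTransformRotate(transform, -M_PI_2);
            break;
        default:                          // UIImageOrientationUp needs no change
            break;
    }
    return transform;
}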
I'm trying to create an image mask from a composite of two existing images.
First I create the composite, which consists of a small image (the masking image) and a larger image, which is the same size as the background:
UIImage *baseTextureImage = [UIImage imageNamed:@"background.png"];
UIImage *maskImage = [UIImage imageNamed:@"my_mask.jpg"];
UIImage *shapesBase = [UIImage imageNamed:@"largerimage.jpg"];
UIImage *maskImageFull;
CGSize finalSize = CGSizeMake(480.0, 320.0);
UIGraphicsBeginImageContext(finalSize);
[shapesBase drawInRect:CGRectMake(0, 0, 480, 320)];
[maskImage drawInRect:CGRectMake(150, 50, 250, 250)];
maskImageFull = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
I can output this UIImage (maskImageFull) and it looks right: it is the full background size, with a white background and my mask object in black, in the right place on the screen.
I then pass the maskImageFull UIImage through this:
CGImageRef maskRef = [maskImage CGImage];
CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
CGImageGetHeight(maskRef),
CGImageGetBitsPerComponent(maskRef),
CGImageGetBitsPerPixel(maskRef),
CGImageGetBytesPerRow(maskRef),
CGImageGetDataProvider(maskRef), NULL, false);
CGImageRef masked = CGImageCreateWithMask([image CGImage], mask);
UIImage *retImage= [UIImage imageWithCGImage:masked];
The problem is that retImage is all black. If I send a pre-made UIImage in as the mask it works fine; it's just when I try to make it from multiple images that it breaks.
I thought it was a colorspace thing but couldn't seem to fix it. Any help is much appreciated!
I tried the same thing with CGImageCreateWithMask, and got the same result. The solution I found was to use CGContextClipToMask instead:
CGContextRef mainViewContentContext;
CGColorSpaceRef colorSpace;

colorSpace = CGColorSpaceCreateDeviceRGB();

// create a bitmap graphics context the size of the image
mainViewContentContext = CGBitmapContextCreate(NULL, targetSize.width, targetSize.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);

// free the rgb colorspace
CGColorSpaceRelease(colorSpace);

if (mainViewContentContext == NULL)
    return NULL;

CGImageRef maskImage = [[UIImage imageNamed:@"mask.png"] CGImage];
CGContextClipToMask(mainViewContentContext, CGRectMake(0, 0, targetSize.width, targetSize.height), maskImage);
CGContextDrawImage(mainViewContentContext, CGRectMake(thumbnailPoint.x, thumbnailPoint.y, scaledWidth, scaledHeight), self.CGImage);

// Create CGImageRef of the main view bitmap content, and then
// release that bitmap context
CGImageRef mainViewContentBitmapContext = CGBitmapContextCreateImage(mainViewContentContext);
CGContextRelease(mainViewContentContext);

// convert the finished resized image to a UIImage
UIImage *theImage = [UIImage imageWithCGImage:mainViewContentBitmapContext];

// image is retained by the property setting above, so we can
// release the original
CGImageRelease(mainViewContentBitmapContext);

// return the image
return theImage;
- (UIImage *)maskImage:(UIImage *)image {
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    UIImage *maskImage = [UIImage imageNamed:@"MaskFinal.png"];
    CGImageRef maskImageRef = [maskImage CGImage];

    // create a bitmap graphics context the size of the image
    CGContextRef mainViewContentContext = CGBitmapContextCreate(NULL, maskImage.size.width, maskImage.size.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace); // the context retains its color space
    if (mainViewContentContext == NULL)
        return NULL;

    CGFloat ratio = 0;
    ratio = maskImage.size.width / image.size.width;
    if (ratio * image.size.height < maskImage.size.height) {
        ratio = maskImage.size.height / image.size.height;
    }

    CGRect rect1 = {{0, 0}, {maskImage.size.width, maskImage.size.height}};
    CGRect rect2 = {{-((image.size.width * ratio) - maskImage.size.width) / 2, -((image.size.height * ratio) - maskImage.size.height) / 2}, {image.size.width * ratio, image.size.height * ratio}};

    CGContextClipToMask(mainViewContentContext, rect1, maskImageRef);
    CGContextDrawImage(mainViewContentContext, rect2, image.CGImage);

    // Create CGImageRef of the main view bitmap content, and then
    // release that bitmap context
    CGImageRef newImage = CGBitmapContextCreateImage(mainViewContentContext);
    CGContextRelease(mainViewContentContext);

    UIImage *theImage = [UIImage imageWithCGImage:newImage];
    CGImageRelease(newImage);

    // return the image
    return theImage;
}
The image to be masked MUST be created with an alpha channel; the alpha channel cannot be created from code.