iPhone screen capture for a view - Objective-C

On iPhone, is it possible to "screen capture" a UIView and all of its subviews? If it is possible, how?

I found an approach based on -renderInContext: (a CALayer method), though I haven't tried it myself.
I turned that code into a category on UIView.
Call it like this: [aView saveScreenshotToPhotosAlbum];
#import <QuartzCore/QuartzCore.h>
- (UIImage *)captureView {
    // Capture the view's own bounds so the receiver and its subviews are rendered at the right size
    CGRect rect = self.bounds;
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [self.layer renderInContext:context];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}

- (void)saveScreenshotToPhotosAlbum {
    UIImageWriteToSavedPhotosAlbum([self captureView], nil, nil, nil);
}

Related

How to take a screenshot of a UIView which is not in memory (iOS)

I want to convert a UIView to a UIImage. The view is in the background, and when I check the view hierarchy it appears as white space, so it probably never gets loaded.
What I want to do is load the view into memory and then convert it into an image.
You just need to initialize your view with the required frame and pass it to the code below as targetView. Initialize the view before you call the screen-capture method, like this:
UIView *targetView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 320, 568)]; // Initialization
UIImage *capturedImage = [self captureScreen:targetView]; // Call the method
- (UIImage *)captureScreen:(UIView *)targetView
{
    UIView *captureView = targetView;

    // First pass: render the view's layer at the screen's scale (0.0)
    UIGraphicsBeginImageContextWithOptions(captureView.bounds.size, captureView.opaque, 0.0);
    [captureView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Second pass: redraw the screenshot into a context of the same size.
    // The "crop" rect covers the whole view, so nothing is actually cropped,
    // and the fixed 1.2f scale is arbitrary; 0.0 would match the screen scale.
    CGRect cropRect = CGRectMake(0, 0, captureView.frame.size.width, captureView.frame.size.height);
    UIGraphicsBeginImageContextWithOptions(cropRect.size, captureView.opaque, 1.2f);
    [screenshot drawInRect:cropRect];
    UIImage *customScreenShot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return customScreenShot;
}
Use this:
- (UIImage *)snapShot:(UIView *)myView {
    // Note: UIGraphicsBeginImageContext renders at 1x; for Retina quality use
    // UIGraphicsBeginImageContextWithOptions with a scale of 0.0 instead.
    UIGraphicsBeginImageContext(myView.frame.size);
    [myView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return viewImage;
}
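It can be called like this (self.someContainerView is just a placeholder name for whatever view you want to capture):
UIImage *viewImage = [self snapShot:self.someContainerView]; // placeholder view
UIImageWriteToSavedPhotosAlbum(viewImage, nil, nil, nil);    // optionally save it to the Photos album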

I need to merge UITextView text and a UIImageView into one UIImage

I need to merge a UITextView and a UIImageView into one UIImage.
I tried to convert the UITextView (whose background color is clear) to a UIImage, but the background turns black with the following code.
#define IS_OS_7_OR_LATER ([[[UIDevice currentDevice] systemVersion] floatValue] >= 7.0)
-(UIImage*)imageFromView:(UIView*)view
{
CGFloat scale = [UIScreen mainScreen].scale;
UIImage *image;
if (IS_OS_7_OR_LATER)
{
//Optimized/fast method for rendering a UIView as image on iOS 7 and later versions.
UIGraphicsBeginImageContextWithOptions(view.bounds.size, YES, scale);
[view drawViewHierarchyInRect:view.bounds afterScreenUpdates:YES];
image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
}
else
{
//For devices running on earlier iOS versions.
UIGraphicsBeginImageContextWithOptions(view.bounds.size,YES, scale);
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
}
return image;
}
-(UIImage*)imageFromView:(UIView*)view
{
CGFloat scale = [UIScreen mainScreen].scale;
UIImage *image;
UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, scale);
if ([view respondsToSelector:@selector(drawViewHierarchyInRect:afterScreenUpdates:)]) {
[view drawViewHierarchyInRect:view.bounds afterScreenUpdates:YES];
} else {
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
}
image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
If you can change your method to the code above (note the correct way to branch by capability: check whether the view responds to the selector rather than comparing OS versions), the opaque argument should be NO. Your original code asked for an opaque context, which is why the clear background was rendered as black; passing NO keeps the context transparent and should solve your issue. I haven't had a chance to test this code, but it should work; if not, the capability check is still the better approach.
I hope this helps.
Thanks
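As for the actual merge the question asks about, one approach (just a sketch; the textView and imageView property names are assumptions) is to snapshot the text view with a transparent context and then draw it over the photo:
- (UIImage *)mergedImage
{
    UIImage *photo = self.imageView.image;                    // assumed UIImageView property
    UIImage *textImage = [self imageFromView:self.textView];  // assumed UITextView property

    // Draw the photo first, then the transparent-background text snapshot on top of it
    UIGraphicsBeginImageContextWithOptions(photo.size, NO, 0.0);
    [photo drawInRect:CGRectMake(0, 0, photo.size.width, photo.size.height)];
    [textImage drawInRect:CGRectMake(0, 0, photo.size.width, photo.size.height)];
    UIImage *merged = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return merged;
}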

Draw one image above another: code not working

I have two images of the same size, and I need to produce a third picture in which the second image is drawn on top of the first.
I have a UIImage class extension with the following method:
+(UIImage*)imageFrom2Images:(UIImage *)img1 with:(UIImage *)img2 {
UIGraphicsBeginImageContext(img1.size);
CGContextRef context = UIGraphicsGetCurrentContext();
UIGraphicsPushContext(context);
[img1 drawAtPoint:CGPointMake(0, 0)];
[img2 drawAtPoint:CGPointMake(0, 0)];
UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsPopContext();
UIGraphicsEndImageContext();
return resultingImage;
}
I've tried to save the resulting image to the Photos album, but it raises an error and doesn't work. It seems that this method is wrong.
UIImage*t1=[self.mw.imageFIFO lastObject];
UIImage* test=[UIImage imageFrom2Images:self.imageView.image with:t1];
UIImageWriteToSavedPhotosAlbum(test, self,
@selector(image:didFinishSavingWithError:contextInfo:), nil);
The error message I am getting is:
Error Domain=ALAssetsLibraryErrorDomain Code=-3304 "Failed to encode image for saved photos."
UserInfo=0x9398960 {NSUnderlyingError=0x935e760 "Failed to encode image for saved photos.",
NSLocalizedDescription=Failed to encode image for saved photos.}
You should never need to get the current graphics context and then push it - you're just duplicating the top of the context stack. It would be nice if it still worked, but it doesn't. Remove the calls to UIGraphicsPushContext and UIGraphicsPopContext, and it works as intended.
+(UIImage*)imageFrom2Images:(UIImage *)img1 with:(UIImage *)img2 {
UIGraphicsBeginImageContext(img1.size);
[img1 drawAtPoint:CGPointMake(0, 0)];
[img2 drawAtPoint:CGPointMake(0, 0)];
UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return resultingImage;
}
To detect these problems in the future, check the output of imageFrom2Images:. As originally implemented, it returned nil, so it's not surprising that the subsequent calls didn't know what to do with it. (If it had returned an image object, the next step would be to display it in a UIImageView to make sure it's the correct image.)
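For instance, a quick check along those lines might look like this (self.previewImageView is only a hypothetical debug outlet):
UIImage *t1 = [self.mw.imageFIFO lastObject];
UIImage *test = [UIImage imageFrom2Images:self.imageView.image with:t1];
if (test == nil) {
    NSLog(@"imageFrom2Images returned nil, nothing to save");   // bail out instead of passing nil along
    return;
}
self.previewImageView.image = test;   // hypothetical debug image view, to eyeball the result first
UIImageWriteToSavedPhotosAlbum(test, self,
    @selector(image:didFinishSavingWithError:contextInfo:), nil);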
This method should do it:
-(UIImage *)drawFirstImage:(UIImage*)firstImage afterSecondImage:(UIImage *)secondImage
{
float finalWidth=MAX(firstImage.size.width,secondImage.size.width);
float finalHeight=firstImage.size.height + secondImage.size.height;
CGSize finalSize=CGSizeMake(finalWidth, finalHeight);
UIGraphicsBeginImageContext(finalSize);
[firstImage drawInRect:CGRectMake(0, 0, firstImage.size.width, firstImage.size.height)];
[secondImage drawInRect:CGRectMake(0, firstImage.size.height, secondImage.size.width, secondImage.size.height)];
UIImage *resultImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return resultImage;
}
You can use it as:
UIImage *resultImage= [self drawFirstImage:image1 afterSecondImage:image2];
Here is a category implementation:
UIImage+MyExtensions.h
#import <UIKit/UIKit.h>
@interface UIImage (MyExtensions)
-(UIImage *)attachImageBelow:(UIImage *)secondImage;
@end
UIImage+MyExtensions.m
#import "UIImage+MyExtensions.h"
@implementation UIImage (MyExtensions)
-(UIImage *)attachImageBelow:(UIImage *)secondImage{
float finalWidth=MAX(self.size.width,secondImage.size.width);
float finalHeight=self.size.height + secondImage.size.height;
CGSize finalSize=CGSizeMake(finalWidth, finalHeight);
UIGraphicsBeginImageContext(finalSize);
[self drawInRect:CGRectMake(0, 0, self.size.width, self.size.height)];
[secondImage drawInRect:CGRectMake(0, self.size.height, secondImage.size.width, secondImage.size.height)];
UIImage *resultImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return resultImage;
}
@end
You can use it like this:
#import "UIImage+MyExtensions.h"
UIImage *resultImage= [image1 attachImageBelow:image2];

Dynamic UIView to UIImage is blank

I'm trying to create a UIView, add two UIImageViews to it, and then create a UIImage from the result. Here is the code I use, but I get a blank UIImage:
-(UIImage*)getFinalImage{
CGRect rect = CadreImage.frame;
UIView *dynaView = [[UIView alloc] initWithFrame:rect];
UIImageView *frontCadre = [[UIImageView alloc] initWithImage:TheImage];
[dynaView addSubview:frontCadre];
UIGraphicsBeginImageContextWithOptions(dynaView.bounds.size, dynaView.opaque, 0.0);
[dynaView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage * img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return img;
}
Any help would be appreciated.
Thanks.
Here is the code I use to grab a screenshot (note that this captures the whole screen, not a specific view, and it handles Retina displays):
if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)])
UIGraphicsBeginImageContextWithOptions(self.view.window.bounds.size, NO, [UIScreen mainScreen].scale);
else
UIGraphicsBeginImageContext(self.view.bounds.size);
[self.view.window.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Can you also set a breakpoint and step through the program to verify that TheImage is not nil?
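If it helps, here is a sketch of getFinalImage with a nil check and with a second image view actually added (TheSecondImage and backCadre are placeholder names; the code in the question only adds one image view):
- (UIImage *)getFinalImage {
    NSAssert(TheImage != nil, @"TheImage is nil, so the capture will come out blank");

    CGRect rect = CadreImage.frame;
    UIView *dynaView = [[UIView alloc] initWithFrame:rect];

    // Add every image view that should appear in the capture; subviews that are
    // never added cannot show up in the rendered image
    UIImageView *backCadre = [[UIImageView alloc] initWithImage:TheSecondImage]; // placeholder image
    UIImageView *frontCadre = [[UIImageView alloc] initWithImage:TheImage];
    [dynaView addSubview:backCadre];
    [dynaView addSubview:frontCadre];

    UIGraphicsBeginImageContextWithOptions(dynaView.bounds.size, dynaView.opaque, 0.0);
    [dynaView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}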

Taking Screenshot of UIView using UIButton

I am trying to make an app with a button that takes a screenshot of what is drawn on the device's screen and saves it to the device's photo gallery.
How can I do that? Any ideas?
UIGraphicsBeginImageContext(self.view.bounds.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *screenShotImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(screenShotImage, nil, nil, nil);
This code takes a screenshot of your screen and saves the image to the photo gallery.
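Wired to a button, it might look like this (a sketch; the action name is an assumption, and it should be connected to the button's Touch Up Inside event):
- (IBAction)screenshotButtonTapped:(id)sender {
    UIGraphicsBeginImageContext(self.view.bounds.size);
    [self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *screenShotImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    UIImageWriteToSavedPhotosAlbum(screenShotImage, nil, nil, nil);  // save to the Photos album
}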
Apple describes a way to do it here: http://developer.apple.com/library/ios/#qa/qa1703/_index.html
- (UIImage*)screenshot
{
// Create a graphics context with the target size
// On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
// On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
CGSize imageSize = [[UIScreen mainScreen] bounds].size;
if (NULL != UIGraphicsBeginImageContextWithOptions)
UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
else
UIGraphicsBeginImageContext(imageSize);
CGContextRef context = UIGraphicsGetCurrentContext();
// Iterate over every window from back to front
for (UIWindow *window in [[UIApplication sharedApplication] windows])
{
if (![window respondsToSelector:@selector(screen)] || [window screen] == [UIScreen mainScreen])
{
// -renderInContext: renders in the coordinate space of the layer,
// so we must first apply the layer's geometry to the graphics context
CGContextSaveGState(context);
// Center the context around the window's anchor point
CGContextTranslateCTM(context, [window center].x, [window center].y);
// Apply the window's transform about the anchor point
CGContextConcatCTM(context, [window transform]);
// Offset by the portion of the bounds left of and above the anchor point
CGContextTranslateCTM(context,
-[window bounds].size.width * [[window layer] anchorPoint].x,
-[window bounds].size.height * [[window layer] anchorPoint].y);
// Render the layer hierarchy to the current context
[[window layer] renderInContext:context];
// Restore the context
CGContextRestoreGState(context);
}
}
// Retrieve the screenshot image
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
And remember to #import <QuartzCore/QuartzCore.h>.
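To tie it back to the save-to-album part of the question, something like this could be used (a sketch; the action name is an assumption, and the callback follows the standard UIImageWriteToSavedPhotosAlbum selector signature):
- (IBAction)saveScreenshot:(id)sender {
    UIImage *image = [self screenshot];
    UIImageWriteToSavedPhotosAlbum(image, self,
                                   @selector(image:didFinishSavingWithError:contextInfo:), NULL);
}

// Completion callback required by the selector passed to UIImageWriteToSavedPhotosAlbum
- (void)image:(UIImage *)image didFinishSavingWithError:(NSError *)error contextInfo:(void *)contextInfo {
    if (error) {
        NSLog(@"Saving the screenshot failed: %@", error.localizedDescription);
    }
}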