iPhone 4 iOS: take a screenshot of the view without NavBar and TabBar? - objective-c

I have this code to take a screenshot of the view:
UIGraphicsBeginImageContext(scrollView.bounds.size);
[scrollView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData * data = UIImagePNGRepresentation(image);
However, even if I set the context to be 320x480, parts of the scroll view are still not shown. The view that the scroll view manages fits perfectly within the 320x480 frame, but parts of it are covered by the status bar, NavBar and TabBar.
I would like to take a full-screen (320x480) screenshot of the view, with the parts covered by the status bar, TabBar and NavBar visible. Any pointers on how to do this?
An extra question, which may be related: the resulting image uses 1x scale and looks very blurry on the retina display, which renders from a larger image and scales it down. This means I'd need to render a 640x960 screenshot to reproduce the original crisp quality. How would I go about doing that?
Thank you!

I found the following on this site: http://www.icodeblog.com/2009/07/27/1188/
UIGraphicsBeginImageContext(YourView.frame.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(viewImage, nil, nil, nil);
You may also want to check out Apple's example of how to take a screenshot: http://developer.apple.com/library/ios/#qa/qa1703/_index.html

First make a screenshot of the whole screen:
// Create a graphics context with the target size
// On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
// On iOS prior to 4, fall back to UIGraphicsBeginImageContext
CGSize imageSize = [[UIScreen mainScreen] bounds].size;
if (NULL != UIGraphicsBeginImageContextWithOptions)
    UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0.0);
else
    UIGraphicsBeginImageContext(imageSize);
CGContextRef context = UIGraphicsGetCurrentContext();
// Iterate over every window from back to front
for (UIWindow *window in [[UIApplication sharedApplication] windows])
{
    if (![window respondsToSelector:@selector(screen)] || [window screen] == [UIScreen mainScreen])
    {
        // -renderInContext: renders in the coordinate space of the layer,
        // so we must first apply the layer's geometry to the graphics context
        CGContextSaveGState(context);
        // Center the context around the window's anchor point
        CGContextTranslateCTM(context, [window center].x, [window center].y);
        // Apply the window's transform about the anchor point
        CGContextConcatCTM(context, [window transform]);
        // Offset by the portion of the bounds left of and above the anchor point
        CGContextTranslateCTM(context,
                              -[window bounds].size.width * [[window layer] anchorPoint].x,
                              -[window bounds].size.height * [[window layer] anchorPoint].y);
        // Render the layer hierarchy to the current context
        [[window layer] renderInContext:context];
        // Restore the context
        CGContextRestoreGState(context);
    }
}
// Retrieve the screenshot image and close the context
UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Then crop it to the right size:
CGRect smallBounds = CGRectMake(0, 64, 320, 372); // You should remove the hard-coded numbers
CGImageRef subImageRef = CGImageCreateWithImageInRect(screenshot.CGImage, smallBounds);
UIImage *cropped = [UIImage imageWithCGImage:subImageRef];
CGImageRelease(subImageRef);
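One caveat worth adding (my note, not part of the original answer): CGImageCreateWithImageInRect works in pixel coordinates, while the screenshot above may carry scale 2 on retina hardware. A hedged sketch of a scale-aware crop, reusing `screenshot` and `smallBounds` from above:
// Convert the point-based crop rect into pixels before cropping
CGFloat scale = screenshot.scale;
CGRect pixelRect = CGRectMake(smallBounds.origin.x * scale,
                              smallBounds.origin.y * scale,
                              smallBounds.size.width * scale,
                              smallBounds.size.height * scale);
CGImageRef subImage = CGImageCreateWithImageInRect(screenshot.CGImage, pixelRect);
// Re-wrap with the original scale so the cropped UIImage reports logical points
UIImage *croppedRetina = [UIImage imageWithCGImage:subImage
                                             scale:scale
                                       orientation:UIImageOrientationUp];
CGImageRelease(subImage);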

Related

Creating retina screenshot programmatically resulting in non retina image

I am trying to take a retina screenshot programmatically and I have tried every approach found online, but I could not get a retina-resolution screenshot.
I understand the following private API:
UIGetScreenImage();
cannot be used as Apple will reject your app. However, this method returns exactly what I need (640x960 screenshot of the screen).
I have tried this method on my iPhone 4 as well as the iPhone 4 simulator on retina hardware, but the resulting image is always 320x480.
- (UIImage *)captureView
{
    AppDelegate *appdelegate = [[UIApplication sharedApplication] delegate];
    if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)])
        UIGraphicsBeginImageContextWithOptions(appdelegate.window.bounds.size, NO, 0.0);
    else
        UIGraphicsBeginImageContext(appdelegate.window.bounds.size);
    [appdelegate.window.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    NSLog(@"SIZE: %@", NSStringFromCGSize(image.size));
    NSLog(@"scale: %f", [UIScreen mainScreen].scale);
    return image;
}
I have also tried the Apple recommended way:
- (UIImage *)screenshot
{
    // Create a graphics context with the target size
    // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
    // On iOS prior to 4, fall back to UIGraphicsBeginImageContext
    CGSize imageSize = [[UIScreen mainScreen] bounds].size;
    if (NULL != UIGraphicsBeginImageContextWithOptions)
        UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
    else
        UIGraphicsBeginImageContext(imageSize);
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Iterate over every window from back to front
    for (UIWindow *window in [[UIApplication sharedApplication] windows])
    {
        if (![window respondsToSelector:@selector(screen)] || [window screen] == [UIScreen mainScreen])
        {
            // -renderInContext: renders in the coordinate space of the layer,
            // so we must first apply the layer's geometry to the graphics context
            CGContextSaveGState(context);
            // Center the context around the window's anchor point
            CGContextTranslateCTM(context, [window center].x, [window center].y);
            // Apply the window's transform about the anchor point
            CGContextConcatCTM(context, [window transform]);
            // Offset by the portion of the bounds left of and above the anchor point
            CGContextTranslateCTM(context,
                                  -[window bounds].size.width * [[window layer] anchorPoint].x,
                                  -[window bounds].size.height * [[window layer] anchorPoint].y);
            // Render the layer hierarchy to the current context
            [[window layer] renderInContext:context];
            // Restore the context
            CGContextRestoreGState(context);
        }
    }
    // Retrieve the screenshot image
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    NSLog(@"Size: %@", NSStringFromCGSize(image.size));
    return image;
}
But it also returns a non-retina image:
2012-12-23 19:57:45.205 PostCard[3351:707] size: {320, 480}
Is there something obvious I'm missing? How come methods that are supposed to take retina screenshots return non-retina ones?
Thanks in advance!
I don't see anything wrong in your code. Apart from image.size, have you tried logging image.scale? Is it 1 or 2? If it's 2, it is actually a retina image.
UIImage.scale represents the scale of the image. So an image with UIImage.size being 320×480 and UIImage.scale being 2 has an actual size of 640×960. From Apple's doc:
If you multiply the logical size of the image (stored in the size property) by the value in this property, you get the dimensions of the image in pixels.
It's the same idea as when you load an image into a UIImage with the @2x modifier. For example:
a.png (100×80) => size=100×80 scale=1
b@2x.png (200×160) => size=100×80 scale=2
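A minimal check (my sketch, assuming `image` is the UIImage returned by the screenshot method above) that makes the points-vs-pixels distinction visible:
// Logical size in points vs. actual size in pixels
NSLog(@"size: %@  scale: %f", NSStringFromCGSize(image.size), image.scale);
NSLog(@"pixels: %zu x %zu",
      CGImageGetWidth(image.CGImage), CGImageGetHeight(image.CGImage));
// On an iPhone 4 this should print size {320, 480}, scale 2.0, pixels 640 x 960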

Capturing Screen of QLPreviewController causing problems in iOS 6

On Apple's Technical Notes site, they posted code for screen capture. Everything works, except it does not capture files being previewed with QLPreviewController under iOS 6.
I'm wondering if they are using some new OpenGL ES rendering for the file preview so that it can't be captured? In theory, this piece of code should be able to capture anything on screen, right?
From http://developer.apple.com/library/ios/#qa/qa1703/_index.html
- (UIImage *)screenshot
{
    // Create a graphics context with the target size
    // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
    // On iOS prior to 4, fall back to UIGraphicsBeginImageContext
    CGSize imageSize = [[UIScreen mainScreen] bounds].size;
    if (NULL != UIGraphicsBeginImageContextWithOptions)
        UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
    else
        UIGraphicsBeginImageContext(imageSize);
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Iterate over every window from back to front
    for (UIWindow *window in [[UIApplication sharedApplication] windows])
    {
        if (![window respondsToSelector:@selector(screen)] || [window screen] == [UIScreen mainScreen])
        {
            // -renderInContext: renders in the coordinate space of the layer,
            // so we must first apply the layer's geometry to the graphics context
            CGContextSaveGState(context);
            // Center the context around the window's anchor point
            CGContextTranslateCTM(context, [window center].x, [window center].y);
            // Apply the window's transform about the anchor point
            CGContextConcatCTM(context, [window transform]);
            // Offset by the portion of the bounds left of and above the anchor point
            CGContextTranslateCTM(context,
                                  -[window bounds].size.width * [[window layer] anchorPoint].x,
                                  -[window bounds].size.height * [[window layer] anchorPoint].y);
            // Render the layer hierarchy to the current context
            [[window layer] renderInContext:context];
            // Restore the context
            CGContextRestoreGState(context);
        }
    }
    // Retrieve the screenshot image
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
This doesn't sound similar, but it actually fails for the same reason that this fails: iOS 6 UIGestures (Tap) stops working with QLPreviewController.
Since iOS 6, QLPreviewController is actually a completely separate app (separate process and everything):
=> so when you push it, your whole app moves to the background, including its window.
=> the QLPreviewController is never really part of your window.
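A quick diagnostic (my own sketch, not from the original answer): log the windows your process actually owns while the preview is on screen. renderInContext: can only reach layers drawn in your process, so even if a hosting window shows up here, the remotely drawn QuickLook content renders blank:
// List the windows owned by this process while the preview is visible
for (UIWindow *window in [[UIApplication sharedApplication] windows]) {
    NSLog(@"window: %@, level: %f, hidden: %d",
          NSStringFromClass([window class]), window.windowLevel, window.hidden);
}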

Screenshot/Snapshot from mapView stores a blank Map

I want to take a screenshot of a MapView and save it to the photo album.
This is the source I'm using:
- (IBAction)screenshot:(id)sender {
    if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)])
        UIGraphicsBeginImageContextWithOptions(mapView.frame.size, NO, [UIScreen mainScreen].scale);
    else
        UIGraphicsBeginImageContext(mapView.frame.size);
    [mapView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    UIImageWriteToSavedPhotosAlbum(viewImage, nil, nil, nil);
}
The action is successful, but the saved photo shows a blank map (see image: MapView Screenshot).
I don't know what is wrong. I've already tried several snippets, all with the same result. If I take a screenshot of the entire view, the map also looks like the picture above.
Does anyone have any idea or can help me?
EDIT:
- (UIImage *)ImageFromMapView
{
    UIGraphicsBeginImageContext(self.frame.size);
    [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    // [[[yourmapView.layer sublayers] objectAtIndex:1] renderInContext:UIGraphicsGetCurrentContext()]; // try this if the line above fails
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
Try using UIGraphicsBeginImageContextWithOptions instead of UIGraphicsBeginImageContext:
UIGraphicsBeginImageContextWithOptions(self.frame.size, NO, 0.0);
Note: Starting from iOS 4, UIGraphicsBeginImageContextWithOptions allows you to specify a scale factor. A scale factor of zero sets it to the scale factor of the device's main screen. This enables you to get the sharpest, highest-resolution snapshot of the display, including on a Retina display.
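Putting that together, a minimal sketch of the corrected helper (my own arrangement, assuming, as in the edit above, that `self` is the map view):
- (UIImage *)imageFromMapView
{
    // 0.0 picks up the main screen's scale factor automatically
    UIGraphicsBeginImageContextWithOptions(self.frame.size, NO, 0.0);
    [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
Note this addresses sharpness; if the map tiles themselves still come out blank, that is a separate issue with how the map layer draws its content.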

iOS Capture "screenshot" of camera controller

In my app I display the camera and I am taking screenshots of certain parts using UIGetScreenImage. (I tried UIGraphicsGetImageFromCurrentImageContext and it works great for screenshots of almost any part of my app, but for the camera view it just returns a blank white image.) Anyway, I fear Apple will reject my app because of UIGetScreenImage... How can I take a "screenshot" of a 50px by 50px box in the upper left corner of the camera view without using this method? I searched and all I could find was "AVCaptureSession", and I couldn't find much about what it does, or whether it's even what I'm looking for... Any insight? :) Thanks guys!!!
It doesn't get much clearer than Apple's docs on how to capture the camera's view. Yes, this does involve the class AVCaptureSession.
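For reference, here is a rough sketch of that approach (my own, using the era-appropriate AVCaptureStillImageOutput API, which was later deprecated in iOS 10; all variable names are illustrative). It requires #import <AVFoundation/AVFoundation.h>:
// Set up a capture session feeding a still-image output
AVCaptureSession *session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPresetPhoto;

AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
if (input && [session canAddInput:input])
    [session addInput:input];

AVCaptureStillImageOutput *output = [[AVCaptureStillImageOutput alloc] init];
if ([session canAddOutput:output])
    [session addOutput:output];
[session startRunning];

// Later, when you want the "screenshot" of the camera feed:
AVCaptureConnection *connection = [output connectionWithMediaType:AVMediaTypeVideo];
[output captureStillImageAsynchronouslyFromConnection:connection
                                    completionHandler:^(CMSampleBufferRef buffer, NSError *err) {
    NSData *jpegData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:buffer];
    UIImage *frame = [UIImage imageWithData:jpegData];
    // Crop the 50x50 box from the top-left corner with CGImageCreateWithImageInRect if needed
}];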
If you actually need a screenshot of the interface, you should take a look at the docs for that. Cut-and-paste code from the link (if this does not work, you should submit a bug report to Apple):
Update: It appears that this approach is no longer supported on newer versions of iOS. The second link is now broken as well.
- (UIImage *)screenshot
{
    // Create a graphics context with the target size
    // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
    // On iOS prior to 4, fall back to UIGraphicsBeginImageContext
    CGSize imageSize = [[UIScreen mainScreen] bounds].size;
    if (NULL != UIGraphicsBeginImageContextWithOptions)
        UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
    else
        UIGraphicsBeginImageContext(imageSize);
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Iterate over every window from back to front
    for (UIWindow *window in [[UIApplication sharedApplication] windows])
    {
        if (![window respondsToSelector:@selector(screen)] || [window screen] == [UIScreen mainScreen])
        {
            // -renderInContext: renders in the coordinate space of the layer,
            // so we must first apply the layer's geometry to the graphics context
            CGContextSaveGState(context);
            // Center the context around the window's anchor point
            CGContextTranslateCTM(context, [window center].x, [window center].y);
            // Apply the window's transform about the anchor point
            CGContextConcatCTM(context, [window transform]);
            // Offset by the portion of the bounds left of and above the anchor point
            CGContextTranslateCTM(context,
                                  -[window bounds].size.width * [[window layer] anchorPoint].x,
                                  -[window bounds].size.height * [[window layer] anchorPoint].y);
            // Render the layer hierarchy to the current context
            [[window layer] renderInContext:context];
            // Restore the context
            CGContextRestoreGState(context);
        }
    }
    // Retrieve the screenshot image
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
Since iOS 7 you can use drawViewHierarchyInRect:
UIImage *image;
UIGraphicsBeginImageContext(self.view.frame.size);
[self.view drawViewHierarchyInRect:self.view.frame afterScreenUpdates:YES];
image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
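One small addition of mine, not in the original answer: if you need the capture at retina resolution, open the context with a scale-aware call instead:
// 0.0 means "use the main screen's scale factor"
UIGraphicsBeginImageContextWithOptions(self.view.frame.size, NO, 0.0);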

Image Context produces artifacts when compositing UIImages

I'm trying to overlay a custom semi-transparent image over a base image. The overlay image is stretchable and created like this:
[[UIImage imageNamed:@"overlay.png"] stretchableImageWithLeftCapWidth:5.0 topCapHeight:5.0]
Then I pass that off to a method that overlays it onto the background image for a button:
- (void)overlayImage:(UIImage *)overlay forState:(UIControlState)state {
    UIImage *baseImage = [self backgroundImageForState:state];
    CGRect frame = CGRectZero;
    frame.size = baseImage.size;
    // create a new image context
    UIGraphicsBeginImageContext(baseImage.size);
    // get context
    CGContextRef context = UIGraphicsGetCurrentContext();
    // clear context
    CGContextClearRect(context, frame);
    // draw images
    [baseImage drawInRect:frame];
    [overlay drawInRect:frame]; // blendMode:kCGBlendModeNormal alpha:1.0];
    // get UIImage
    UIImage *overlaidImage = UIGraphicsGetImageFromCurrentImageContext();
    // clean up context
    UIGraphicsEndImageContext();
    [self setBackgroundImage:overlaidImage forState:state];
}
The resulting overlaidImage looks mostly correct: it is the correct size, the alpha is blended correctly, etc. However, it has vertical artifacts/noise.
(example at http://acaciatreesoftware.com/img/UIImage-artifacts.png)
I tried clearing the context first and then turning off PNG compression, which reduces the artifacting somewhat (and eliminates it on non-stretched images, I think).
Does anyone know a method for drawing stretchable UIImages with out this sort of artifacting happening?
So the answer is: don't do this. Instead, you can paint your overlay procedurally, like so:
- (void)overlayWithColor:(UIColor *)overlayColor forState:(UIControlState)state {
    UIImage *baseImage = [self backgroundImageForState:state];
    CGRect frame = CGRectZero;
    frame.size = baseImage.size;
    // create a new image context
    UIGraphicsBeginImageContext(baseImage.size);
    // get context
    CGContextRef context = UIGraphicsGetCurrentContext();
    // draw background image
    [baseImage drawInRect:frame];
    // overlay color
    CGContextSetFillColorWithColor(context, [overlayColor CGColor]);
    CGContextSetBlendMode(context, kCGBlendModeSourceAtop);
    CGContextFillRect(context, frame);
    // get UIImage
    UIImage *overlaidImage = UIGraphicsGetImageFromCurrentImageContext();
    // clean up context
    UIGraphicsEndImageContext();
    [self setBackgroundImage:overlaidImage forState:state];
}
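A usage sketch (my own; it assumes the method above lives in a UIButton subclass or category, and that `button` is such an instance):
// Tint the highlighted state with a 30% black wash
[button overlayWithColor:[[UIColor blackColor] colorWithAlphaComponent:0.3]
                forState:UIControlStateHighlighted];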
Are you being too miserly with your original image, forcing it to stretch rather than shrink? I've found the best results with images that match the target aspect ratio and are reduced in size. Might not solve your problem, though.