Taking a screenshot programmatically in iOS 7

This is the code I use:
UIScreen *screen = [UIScreen mainScreen];
UIWindow *keyWindow = [[UIApplication sharedApplication] keyWindow];
UIView *view = [screen snapshotViewAfterScreenUpdates:YES];
UIGraphicsBeginImageContextWithOptions(screen.bounds.size, NO, 0);
[keyWindow drawViewHierarchyInRect:keyWindow.bounds afterScreenUpdates:YES];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData *data = UIImagePNGRepresentation(image);
Then I send the data to an SLComposeViewController:
[mySLComposerSheet addImage:[UIImage imageWithData:data]];
Everything works, but the image is rotated: my app runs only in landscape, yet the screenshot comes out in portrait.

That's because you are snapshotting the UIWindow, which doesn't handle rotation the way a UIView / UIViewController does. You have to apply the rotation yourself.
Try this code:
- (UIImage *)makeSnapshot
{
CGSize imageSize = CGSizeZero;
UIInterfaceOrientation orientation = [UIApplication sharedApplication].statusBarOrientation;
if (UIInterfaceOrientationIsPortrait(orientation)) {
imageSize = [UIScreen mainScreen].bounds.size;
} else {
imageSize = CGSizeMake([UIScreen mainScreen].bounds.size.height, [UIScreen mainScreen].bounds.size.width);
}
UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
CGContextRef context = UIGraphicsGetCurrentContext();
for (UIWindow *window in [[UIApplication sharedApplication] windows]) {
CGContextSaveGState(context);
CGContextTranslateCTM(context, window.center.x, window.center.y);
CGContextConcatCTM(context, window.transform);
CGContextTranslateCTM(context, -window.bounds.size.width * window.layer.anchorPoint.x, -window.bounds.size.height * window.layer.anchorPoint.y);
if (orientation == UIInterfaceOrientationLandscapeLeft) {
CGContextRotateCTM(context, M_PI_2);
CGContextTranslateCTM(context, 0, -imageSize.width);
} else if (orientation == UIInterfaceOrientationLandscapeRight) {
CGContextRotateCTM(context, -M_PI_2);
CGContextTranslateCTM(context, -imageSize.height, 0);
} else if (orientation == UIInterfaceOrientationPortraitUpsideDown) {
CGContextRotateCTM(context, M_PI);
CGContextTranslateCTM(context, -imageSize.width, -imageSize.height);
}
if ([window respondsToSelector:@selector(drawViewHierarchyInRect:afterScreenUpdates:)]) {
[window drawViewHierarchyInRect:window.bounds afterScreenUpdates:YES];
} else {
[window.layer renderInContext:context];
}
CGContextRestoreGState(context);
}
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
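For completeness, a minimal sketch of wiring this into the share sheet from the question (mySLComposerSheet is the SLComposeViewController you already have):
UIImage *snapshot = [self makeSnapshot];
NSData *data = UIImagePNGRepresentation(snapshot); // only needed if you also want the PNG bytes
[mySLComposerSheet addImage:snapshot];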

Related

I can't draw continuous lines on an image using a pan gesture

The code I use to draw on images is given below; I use a pan gesture to find where the user touches. With this code, the lines the user draws come out as separate dots when I move my finger over the image quickly.
UIGraphicsBeginImageContextWithOptions((self.thatsMyImage.frame.size), NO, 0.0);
[self.selfieImage.image drawInRect:CGRectMake(0,0,self.thatsMyImage.frame.size.width, self.thatsMyImage.frame.size.height)];
[[UIColor whiteColor] set];
CGContextMoveToPoint(UIGraphicsGetCurrentContext(), from.x, from.y);
CGContextAddLineToPoint(UIGraphicsGetCurrentContext(), to.x , to.y);
CGContextSetLineWidth(UIGraphicsGetCurrentContext(), 10.0f);
CGContextStrokeEllipseInRect(UIGraphicsGetCurrentContext(), CGRectMake(to.x, to.y,10,10));
CGContextSetFillColor(UIGraphicsGetCurrentContext(), CGColorGetComponents([[UIColor blueColor] CGColor]));
CGContextFillPath(UIGraphicsGetCurrentContext());
CGContextStrokePath(UIGraphicsGetCurrentContext());
self.thatsMyImage.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
This is the method that is called when the pan gesture is detected:
-(void)freeFormDrawing:(UIPanGestureRecognizer *)gesture
{
if(gesture.state == UIGestureRecognizerStateChanged)
{
CGPoint p = [gesture locationInView:self.selfieImage];
CGPoint startPoint = lastPoint;
lastPoint = [gesture locationInView:self.selfieImage];
[self drawLineFrom:startPoint endPoint:p];
}
if(gesture.state == UIGestureRecognizerStateEnded)
{
// lastPoint = [gesture locationInView:self.selfieImage];
}
}
Can anybody tell me how I can do free-form (doodle) drawing on images with smooth lines and curves? Thanks in advance, and happy coding!
I implemented the same thing. What I did was add a transparent UIImageView above the UIImageView holding the UIImage I wanted to draw on.
UIImageView *drawableView = [[UIImageView alloc]initWithFrame:self.drawImageView.bounds];
drawableView.userInteractionEnabled = YES;
UIPanGestureRecognizer *panGesture = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(drawingViewDidPan:)];
[drawableView addGestureRecognizer:panGesture];
Then, when the user panned on the UIImageView, I would call this method:
- (void)drawingViewDidPan:(UIPanGestureRecognizer*)sender
{
CGPoint currentDraggingPosition = [sender locationInView:drawableView];
if(sender.state == UIGestureRecognizerStateBegan) {
prevDraggingPosition = currentDraggingPosition;
}
if(sender.state != UIGestureRecognizerStateEnded){
[self drawLine:prevDraggingPosition to:currentDraggingPosition];
}
prevDraggingPosition = currentDraggingPosition;
}
Both prevDraggingPosition and currentDraggingPosition are CGPoint.
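For context, the instance variables these snippets rely on are not shown in the original answer; they might be declared roughly like this (the class name is hypothetical):
@interface DrawingViewController () {
    UIImageView *drawableView;      // transparent overlay that receives the pan gesture
    CGPoint prevDraggingPosition;   // last pan location, updated in drawingViewDidPan:
    UIColor *strokeColor;           // current stroke color
    UIView *colorChangeView;        // swatch view whose backgroundColor supplies strokeColor
    CGSize originalImageSize;       // size of the underlying photo, used in buildImage
}
@end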
Then I used the following method to draw a line from prevDraggingPosition to currentDraggingPosition:
-(void)drawLine:(CGPoint)from to:(CGPoint)to
{
@autoreleasepool {
CGSize size = drawableView.frame.size;
UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
CGContextRef context = UIGraphicsGetCurrentContext();
[drawableView.image drawAtPoint:CGPointZero];
CGFloat strokeWidth = 4.0;
strokeColor = colorChangeView.backgroundColor;
CGContextSetLineWidth(context, strokeWidth);
CGContextSetStrokeColorWithColor(context, strokeColor.CGColor);
CGContextSetLineCap(context, kCGLineCapRound);
CGContextMoveToPoint(context, from.x, from.y);
CGContextAddLineToPoint(context, to.x, to.y);
CGContextStrokePath(context);
drawableView.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
}
}
Finally, to get the image with the drawing on it, you can compose the drawableView image onto the image in your original UIImageView like this:
- (UIImage*)buildImage
{
@autoreleasepool {
UIGraphicsBeginImageContextWithOptions(originalImageSize, NO, self.drawImageView.image.scale);
[self.drawImageView.image drawAtPoint:CGPointZero];
[drawableView.image drawInRect:CGRectMake(0,0, originalImageSize.width, originalImageSize.height)];
UIImage *tmp = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return tmp;
}
}
Where originalImageSize is the size of your image.
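When you need the composed result it is then a single call, e.g. (saving to the photo library is just one illustrative option):
UIImage *finalImage = [self buildImage];
UIImageWriteToSavedPhotosAlbum(finalImage, nil, NULL, NULL); // or attach it to a mail/share sheet instead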
Hope this helps!

Calculate the crop rect in a UIView after zoom in/out, rotate, and pan

I need to build an image-cropping feature that lets the user pinch in/out and rotate the image.
I have a parent UIView that holds the UIImageView.
The parent UIView has clipsToBounds = YES.
The inner UIImageView has contentMode = UIViewContentModeScaleAspectFill.
I don't understand how to calculate the crop rect.
This is the parent view before picking an image:
This is after the UIImageView is added to the parent view with frame (0, 0, parent.width, parent.height) and UIViewContentModeScaleAspectFill.
This is after pinching in:
This is my crop code.
-(UIImage*)createImageToPost:(UIImageView*)imageView withParentView:(UIView*)paretView
{
CGRect rectToCrop = CGRectMake(paretView.frame.origin.x - imageView.frame.origin.x , paretView.frame.origin.y - imageView.frame.origin.y, fabs(paretView.frame.size.width - imageView.image.size.width), fabs(paretView.frame.size.height - imageView.image.size.height));
UIImage * newCropImage = [self cropWithImageView:imageView withCropRect:rectToCrop];
return newCropImage;
}
-(UIImage*)cropWithImageView:(UIImageView*)imageView withCropRect:(CGRect)cropRect
{
CGRect rect = CGRectMake(0, 0, imageView.image.size.width, imageView.image.size.height);
// Begin the drawing
UIGraphicsBeginImageContext(imageView.image.size);
CGContextRef ctx = UIGraphicsGetCurrentContext();
// Clear whole thing
CGContextClearRect(ctx, rect);
// Transform the image (as the image view has been transformed)
CGContextTranslateCTM(ctx, rect.size.width*0.5, rect.size.height*0.5);
CGContextConcatCTM(ctx, imageView.transform);
CGContextTranslateCTM(ctx, -rect.size.width*0.5, -rect.size.height*0.5);
// Translate and scale upside-down to compensate for Quartz's inverted coordinate system
CGContextTranslateCTM(ctx, 0.0, rect.size.height);
CGContextScaleCTM(ctx, 1.0, -1.0);
// Draw view into context
CGContextDrawImage(ctx, rect, imageView.image.CGImage);
// Create the new UIImage from the context
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
// End the drawing
UIGraphicsEndImageContext();
// Begin the drawing (again)
UIGraphicsBeginImageContext(cropRect.size);
ctx = UIGraphicsGetCurrentContext();
// Clear whole thing
CGContextClearRect(ctx, CGRectMake(0, 0, cropRect.size.width, cropRect.size.height));
// Translate to compensate for the different positions of the image
CGContextTranslateCTM(ctx, -((newImage.size.width*0.5)-(cropRect.size.width*0.5)),
(newImage.size.height*0.5)-(cropRect.size.height*0.5));
// Translate and scale upside-down to compensate for Quartz's inverted coordinate system
CGContextTranslateCTM(ctx, 0.0, cropRect.size.height);
CGContextScaleCTM(ctx, 1.0, -1.0);
// Draw view into context
CGContextDrawImage(ctx, CGRectMake(0,0,newImage.size.width,newImage.size.height), newImage.CGImage);
// Create the new UIImage from the context
newImage = UIGraphicsGetImageFromCurrentImageContext();
// End the drawing
UIGraphicsEndImageContext();
return newImage;
}
UIPinchGestureRecognizer *pinchGestureRecognizer = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(pinchGestureDetected:)];
[pinchGestureRecognizer setDelegate:self];
[imageView addGestureRecognizer:pinchGestureRecognizer];
// create and configure the rotation gesture
UIRotationGestureRecognizer *rotationGestureRecognizer = [[UIRotationGestureRecognizer alloc] initWithTarget:self action:@selector(rotationGestureDetected:)];
[rotationGestureRecognizer setDelegate:self];
[imageView addGestureRecognizer:rotationGestureRecognizer];
// create and configure the pan gesture
UIPanGestureRecognizer *panGestureRecognizer = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(panGestureDetected:)];
[panGestureRecognizer setDelegate:self];
[imageView addGestureRecognizer:panGestureRecognizer];
- (void)pinchGestureDetected:(UIPinchGestureRecognizer *)recognizer
{
UIGestureRecognizerState state = [recognizer state];
if (state == UIGestureRecognizerStateBegan || state == UIGestureRecognizerStateChanged)
{
CGFloat scale = [recognizer scale];
[recognizer.view setTransform:CGAffineTransformScale(recognizer.view.transform, scale, scale)];
[recognizer setScale:1.0];
}
}
- (void)rotationGestureDetected:(UIRotationGestureRecognizer *)recognizer
{
UIGestureRecognizerState state = [recognizer state];
if (state == UIGestureRecognizerStateBegan || state == UIGestureRecognizerStateChanged)
{
CGFloat rotation = [recognizer rotation];
[recognizer.view setTransform:CGAffineTransformRotate(recognizer.view.transform, rotation)];
[recognizer setRotation:0];
}
}
- (void)panGestureDetected:(UIPanGestureRecognizer *)recognizer
{
UIGestureRecognizerState state = [recognizer state];
if (state == UIGestureRecognizerStateBegan || state == UIGestureRecognizerStateChanged)
{
CGPoint translation = [recognizer translationInView:recognizer.view];
[recognizer.view setTransform:CGAffineTransformTranslate(recognizer.view.transform, translation.x, translation.y)];
[recognizer setTranslation:CGPointZero inView:recognizer.view];
}
}
This is the image I get after cropping:

How to fix translucent toolbar screenshot showing as black in iOS 7?

My top and bottom toolbars show as black in the screenshot, while in the app they appear light gray. I'm assuming it may have something to do with the default 'Translucent' checkbox I found in the toolbar's Attributes inspector in the storyboard?
I'm using a UIActivityViewController to shoot this image out in an email. Here's the code for grabbing the screenshot:
// Grab a screenshot
UIGraphicsBeginImageContext(self.view.bounds.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *imageToShare = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Solution found here: iOS: what's the fastest, most performant way to make a screenshot programmatically?
It seems like this should be a factory method provided by an Apple framework, though. I'm putting this in a Utils.h/.m class:
+ (UIImage *)screenshot
{
CGSize imageSize = CGSizeZero;
UIInterfaceOrientation orientation = [UIApplication sharedApplication].statusBarOrientation;
if (UIInterfaceOrientationIsPortrait(orientation)) {
imageSize = [UIScreen mainScreen].bounds.size;
} else {
imageSize = CGSizeMake([UIScreen mainScreen].bounds.size.height, [UIScreen mainScreen].bounds.size.width);
}
UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
CGContextRef context = UIGraphicsGetCurrentContext();
for (UIWindow *window in [[UIApplication sharedApplication] windows]) {
CGContextSaveGState(context);
CGContextTranslateCTM(context, window.center.x, window.center.y);
CGContextConcatCTM(context, window.transform);
CGContextTranslateCTM(context, -window.bounds.size.width * window.layer.anchorPoint.x, -window.bounds.size.height * window.layer.anchorPoint.y);
if (orientation == UIInterfaceOrientationLandscapeLeft) {
CGContextRotateCTM(context, M_PI_2);
CGContextTranslateCTM(context, 0, -imageSize.width);
} else if (orientation == UIInterfaceOrientationLandscapeRight) {
CGContextRotateCTM(context, -M_PI_2);
CGContextTranslateCTM(context, -imageSize.height, 0);
} else if (orientation == UIInterfaceOrientationPortraitUpsideDown) {
CGContextRotateCTM(context, M_PI);
CGContextTranslateCTM(context, -imageSize.width, -imageSize.height);
}
if ([window respondsToSelector:@selector(drawViewHierarchyInRect:afterScreenUpdates:)]) {
[window drawViewHierarchyInRect:window.bounds afterScreenUpdates:YES];
} else {
[window.layer renderInContext:context];
}
CGContextRestoreGState(context);
}
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
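A quick usage sketch, handing the result straight to the UIActivityViewController mentioned in the question (the activity items are illustrative):
UIImage *imageToShare = [Utils screenshot];
UIActivityViewController *activityVC = [[UIActivityViewController alloc] initWithActivityItems:@[imageToShare] applicationActivities:nil];
[self presentViewController:activityVC animated:YES completion:nil];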
You should probably set the background color on your view before rendering its layer:
self.view.backgroundColor = [UIColor whateverColorYouWant];
UIGraphicsBeginImageContext(self.view.bounds.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *imageToShare = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Take a look at that answer.

iOS 7: when the tab bar is hidden only in landscape, its space remains black

In my app the tab bar is shown in portrait and hidden in landscape. This worked smoothly in iOS 6, but in iOS 7 the tab bar is hidden in landscape while its space remains black. Here is my code.
-(void)willAnimateRotationToInterfaceOrientation:(UIInterfaceOrientation)toInterfaceOrientation duration:(NSTimeInterval)duration
{
if(![UICommonUtils isiPad]){
if(toInterfaceOrientation == UIDeviceOrientationLandscapeRight || toInterfaceOrientation == UIDeviceOrientationLandscapeLeft) {
[UICommonUtils hideTabBar:self.tabBarController];
} else if(toInterfaceOrientation == UIDeviceOrientationPortrait || toInterfaceOrientation == UIDeviceOrientationPortraitUpsideDown){
[UICommonUtils showTabBar:self.tabBarController];
}
}
}
Hide Tabbar
+(void)hideTabBar:(UITabBarController *)tabbarcontroller
{
CGRect screenRect = [[UIScreen mainScreen] bounds];
float fHeight = screenRect.size.height;
if( UIInterfaceOrientationIsLandscape([UIApplication sharedApplication].statusBarOrientation) )
{
fHeight = screenRect.size.width;
}
for(UIView *view in tabbarcontroller.view.subviews)
{
if([view isKindOfClass:[UITabBar class]])
{
[view setFrame:CGRectMake(view.frame.origin.x, fHeight, view.frame.size.width, view.frame.size.height)];
}
else
{
[view setFrame:CGRectMake(view.frame.origin.x, view.frame.origin.y, view.frame.size.width, fHeight)];
view.backgroundColor = [UIColor blackColor];
}
}
}
Show Tabbar
+(void)showTabBar:(UITabBarController *) tabbarcontroller
{
CGRect screenRect = [[UIScreen mainScreen] bounds];
float fHeight = screenRect.size.height - 49.0;
if( UIInterfaceOrientationIsLandscape([UIApplication sharedApplication].statusBarOrientation) )
{
fHeight = screenRect.size.width - 49.0;
}
for(UIView *view in tabbarcontroller.view.subviews)
{
if([view isKindOfClass:[UITabBar class]])
{
[view setFrame:CGRectMake(view.frame.origin.x, fHeight, view.frame.size.width, view.frame.size.height)];
}
else
{
[view setFrame:CGRectMake(view.frame.origin.x, view.frame.origin.y, view.frame.size.width, fHeight)];
}
}
}
Add the following lines of code in the viewDidLoad method. In iOS 7 view controllers extend their layout under translucent bars by default, and UIRectEdgeNone restores the iOS 6-style layout:
if ([self respondsToSelector:@selector(edgesForExtendedLayout)])
{
self.edgesForExtendedLayout = UIRectEdgeNone;
}

Rotating a full screen UIImageView

I have two images:
Help-Portrait.png (320 x 480)
Help-Landscape.png (480 x 320)
When a user taps the help button on any view, they need to be presented with the correct image, which should also rotate when the device does. I have tried adding the image view to both the window and the navigation controller's view.
For some reason I am having issues with this.
Could anyone shed light on what I am doing wrong?
UIImage *image = nil;
CGRect frame;
if (UIInterfaceOrientationIsPortrait([[UIApplication sharedApplication] statusBarOrientation])) {
image = [UIImage imageNamed:#"Help-Portrait.png"];
frame = CGRectMake(0, 0, 320, 480);
} else {
image = [UIImage imageNamed:#"Help-Landscape.png"];
frame = CGRectMake(0, 0, 480, 320);
}
if (!helpImageView) {
helpImageView = [[UIImageView alloc] initWithFrame:frame];
helpImageView.autoresizingMask = UIViewAutoresizingFlexibleHeight | UIViewAutoresizingFlexibleWidth;
helpImageView.image = image;
}
[[UIApplication sharedApplication] setStatusBarHidden:YES];
UITapGestureRecognizer *tap = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(helpImageTapped:)];
helpImageView.userInteractionEnabled = YES;
[helpImageView addGestureRecognizer:tap];
[self.view addSubview:helpImageView];
[tap release];
In willRotateToInterfaceOrientation:duration: I have:
if(helpImageView) {
[(id)[UIApplication sharedApplication] setStatusBarHidden:NO animated:YES];
if (UIInterfaceOrientationIsPortrait(toInterfaceOrientation)) {
helpImageView.image = [UIImage imageNamed:#"Help-Portrait.png"];
} else {
helpImageView.image = [UIImage imageNamed:#"Help-Landscape.png"];
}
}
When you rotate the device the image and the frame don't change, and you end up with two thirds of the portrait image displayed on the left part of the screen.
What I want is for it to show the correct image for the orientation, the right way up. I would also like the image rotation to animate, but that's a side issue.
The place where you need to adjust your button image is in your ViewController's shouldAutorotateToInterfaceOrientation method (documentation linked for you).
Do something like:
- (BOOL)shouldAutorotateToInterfaceOrientation: (UIInterfaceOrientation)interfaceOrientation
{
UIImage *image = nil;
if (UIInterfaceOrientationIsPortrait(interfaceOrientation))
{
image = [UIImage imageNamed:@"Help-Portrait.png"];
} else {
image = [UIImage imageNamed:@"Help-Landscape.png"];
}
[yourButton setImage:image forState:UIControlStateNormal];
return YES;
}
Michael Dautermann's answer has most of it, but I'm opposed to using shouldAutorotateToInterfaceOrientation. That method is designed only to determine whether a rotation should occur, nothing else.
You should use either didRotateFromInterfaceOrientation: or willAnimateRotationToInterfaceOrientation:duration: instead.
didRotateFromInterfaceOrientation: - interfaceOrientation is already set on your UIViewController so you can get the current orientation. In this case the rotation animation is already complete.
willAnimateRotationToInterfaceOrientation:duration: - The benefit of this method is timing: you are inside the rotation animation, so you avoid the less-than-pretty effects you get when you change the UI after the rotation animation completes.
Got it working, with this code:
- (void)showHelpImage {
NSString *imageName = #"Help_Portrait.png";
CGRect imageFrame = CGRectMake(0, 0, 320, 480);
helpImageView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:imageName]];
helpImageView.frame = imageFrame;
[self.view addSubview:helpImageView];
[self updateHelpImageForOrientation:self.interfaceOrientation];
UITapGestureRecognizer *tap = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(helpImageTapped:)];
helpImageView.userInteractionEnabled = YES;
[helpImageView addGestureRecognizer:tap];
[self.view addSubview:helpImageView];
[tap release];
}
- (void)willRotateToInterfaceOrientation:(UIInterfaceOrientation)toInterfaceOrientation duration:(NSTimeInterval)duration
{
[self updateHelpImageForOrientation:toInterfaceOrientation];
}
- (void)updateHelpImageForOrientation:(UIInterfaceOrientation)orientation {
NSString *imageName = nil;
CGRect imageFrame = helpImageView.frame;
if (orientation == UIInterfaceOrientationPortrait || orientation == UIInterfaceOrientationPortraitUpsideDown) {
imageName = #"Help_Portrait.png";
imageFrame = CGRectMake( 0, 0, 320, 480);
} else if (orientation == UIInterfaceOrientationLandscapeLeft || orientation == UIInterfaceOrientationLandscapeRight) {
imageName = #"Help_Landscape.png";
imageFrame = CGRectMake( 0, 0, 480, 320);
}
helpImageView.image = [UIImage imageNamed:imageName];
helpImageView.frame = imageFrame;
}
Got the idea from:
http://www.dobervich.com/2010/10/22/fade-out-default-ipad-app-image-with-proper-orientation/
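For the animation side issue mentioned in the question: since willAnimateRotationToInterfaceOrientation:duration: runs inside the rotation animation, one option (a sketch, not part of the original solution) is to do the update there instead of in willRotateToInterfaceOrientation:duration: and cross-dissolve the image swap:
- (void)willAnimateRotationToInterfaceOrientation:(UIInterfaceOrientation)toInterfaceOrientation duration:(NSTimeInterval)duration
{
    // Frame changes made here animate together with the rotation.
    [UIView transitionWithView:helpImageView
                      duration:duration
                       options:UIViewAnimationOptionTransitionCrossDissolve
                    animations:^{
                        [self updateHelpImageForOrientation:toInterfaceOrientation];
                    }
                    completion:nil];
}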